Test Report: Docker_Linux 18943

ef04194c9271b2044affaa93fa59a9d17158e937:2024-05-22:34582

Failed tests (27/342)

Order  Failed test  Duration (s)
159 TestMultiControlPlane/serial/StartCluster 218.21
160 TestMultiControlPlane/serial/DeployApp 720.72
161 TestMultiControlPlane/serial/PingHostFromPods 2.16
162 TestMultiControlPlane/serial/AddWorkerNode 1.67
164 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.86
166 TestMultiControlPlane/serial/StopSecondaryNode 3.02
167 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.85
168 TestMultiControlPlane/serial/RestartSecondaryNode 162.54
169 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.89
170 TestMultiControlPlane/serial/RestartClusterKeepsNodes 215
171 TestMultiControlPlane/serial/DeleteSecondaryNode 1.73
172 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.61
173 TestMultiControlPlane/serial/StopCluster 2.42
174 TestMultiControlPlane/serial/RestartCluster 225.35
175 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.63
176 TestMultiControlPlane/serial/AddSecondaryNode 1.49
177 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.62
233 TestMultiNode/serial/FreshStart2Nodes 248.19
234 TestMultiNode/serial/DeployApp2Nodes 706.31
235 TestMultiNode/serial/PingHostFrom2Pods 2.26
236 TestMultiNode/serial/AddNode 247.13
240 TestMultiNode/serial/StopNode 3.49
241 TestMultiNode/serial/StartAfterStop 162.58
242 TestMultiNode/serial/RestartKeepsNodes 137.53
243 TestMultiNode/serial/DeleteNode 107.97
244 TestMultiNode/serial/StopMultiNode 12.07
245 TestMultiNode/serial/RestartMultiNode 121.89
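
Most of the failures above likely cascade from the first one, TestMultiControlPlane/serial/StartCluster, so reproducing that test is usually the fastest path to triage. As a minimal sketch, the failing start command can be replayed outside the CI harness using the exact invocation recorded at ha_test.go:101 in the log below (the profile name ha-828033 comes from that log; deleting the profile first ensures a clean run):

    # Remove any leftover profile state from a previous attempt.
    out/minikube-linux-amd64 delete -p ha-828033

    # Replay the failing start invocation verbatim from the log.
    out/minikube-linux-amd64 start -p ha-828033 --wait=true --memory=2200 --ha \
        -v=7 --alsologtostderr --driver=docker --container-runtime=docker

If the run fails the same way, note in the stdout transcript below that the second control-plane node "ha-828033-m02" is created, stopped, powered off, deleted, and recreated before the command exits with status 80.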
TestMultiControlPlane/serial/StartCluster (218.21s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-828033 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0522 17:53:05.801797   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:53:46.762545   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:55:08.683507   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
ha_test.go:101: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-828033 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: exit status 80 (3m36.588944394s)

-- stdout --
	* [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Stopping node "ha-828033-m02"  ...
	* Powering off "ha-828033-m02" via SSH ...
	* Deleting "ha-828033-m02" in docker ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

-- /stdout --
** stderr ** 
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
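
Note: the /etc/hosts update above stages the edited file under /tmp and installs it with `sudo cp` instead of redirecting into /etc/hosts directly, because shell redirection is performed by the unprivileged shell before sudo takes effect. The same idiom, generalized (IP and NAME are placeholders):

    # Strip any existing record for $NAME, append the new one, then copy
    # into place with root privileges. $$ makes the temp file name unique.
    IP=192.168.49.1 NAME=host.minikube.internal
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
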
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
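
Note: because `lsmod | grep ip_vs` came back empty (17:53:01.123923 above), the manifest falls back to ARP mode: vip_arp=true plus lease-based leader election floats 192.168.49.254 across control-plane nodes without IPVS load balancing. On a kernel that ships the modules they could be loaded up front; a sketch (standard Linux IPVS module names; this 5.15.0-1060-gcp kernel evidently lacks them):

    # Try to load IPVS so kube-vip can enable control-plane load balancing.
    for m in ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh; do
      sudo modprobe "$m" || echo "module $m unavailable"
    done
    lsmod | grep ip_vs
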
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
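
Note: the file just written, /var/tmp/minikube/kubeadm.yaml.new, is the four-document kubeadm config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a file by hand before `kubeadm init` runs, assuming the `config validate` subcommand present in kubeadm v1.26+:

    # Validate the generated config against the same kubeadm binary that
    # will consume it.
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
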
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
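
Note: the test/ln sequence from 17:53:01.872 through 17:53:01.946 reproduces what OpenSSL's c_rehash does: certificates in /etc/ssl/certs are looked up by an 8-hex-digit subject-name hash, so each installed PEM gets a <hash>.0 symlink (3ec20f2e.0, b5213941.0 and 51391683.0 above). The same mechanism for a single certificate:

    # Compute the subject hash and create the symlink OpenSSL resolves.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
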
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
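
Note: the bootstrap token above (ttl 24h0m0s per the InitConfiguration) eventually expires. If another node needs to join after that, a fresh join command can be minted on an existing control plane with stock kubeadm subcommands; a sketch:

    # Re-issue a worker join command once the original token has expired.
    sudo kubeadm token create --print-join-command
    # For an extra control-plane node, also re-upload the certs and pass the
    # printed key via --control-plane --certificate-key <key>.
    sudo kubeadm init phase upload-certs --upload-certs
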
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
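
Note: the applied cni.yaml (2438 bytes) carries the kindnet manifest that cni.go:136 selected because more than one node was requested. Once the API server is answering, the rollout can be checked; a sketch, assuming the DaemonSet uses minikube's usual name "kindnet" in kube-system:

    # Verify the kindnet CNI rolled out (the DaemonSet name is an assumption).
    kubectl -n kube-system rollout status daemonset/kindnet --timeout=120s
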
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
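
Note: the burst of `kubectl get sa default` calls above, one every 500ms from 17:53:12.850 to 17:53:24.850, is a wait loop: pods cannot be created in a namespace until the token controller has provisioned its default ServiceAccount. The equivalent loop by hand:

    # Block until the "default" ServiceAccount exists.
    until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
      sleep 0.5
    done
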
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
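
Note: the sed pipeline at 17:53:24.987665 splices a hosts plugin block into CoreDNS's Corefile so pods can resolve host.minikube.internal to the gateway address; the "host record injected" line above confirms the replace took. Inspecting the result (the excerpt shows what those sed expressions should have produced, not output captured from this run):

    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
    # Expected to contain, just above the `forward . /etc/resolv.conf` line:
    #     hosts {
    #        192.168.49.1 host.minikube.internal
    #        fallthrough
    #     }
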
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
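
The GET/PUT pair above is the default-storageclass addon at work: list the StorageClass objects, then update "standard" so it carries the default-class annotation. Below is a compilable client-go sketch of the same round trip; the annotation key is the standard Kubernetes one, but the loop shape is an assumption, since the log only shows the single PUT:

    package addons

    import (
        "context"
        "strconv"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // setDefaultStorageClass marks the named class as default and clears the
    // flag on every other class, mirroring the GET-then-PUT seen in the log.
    func setDefaultStorageClass(ctx context.Context, cs kubernetes.Interface, name string) error {
        scs, err := cs.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
        if err != nil {
            return err
        }
        for i := range scs.Items {
            sc := &scs.Items[i]
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = strconv.FormatBool(sc.Name == name)
            if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
                return err
            }
        }
        return nil
    }
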
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
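
kic.go assigns node addresses positionally on the cluster network instead of letting Docker pick them: the gateway holds .1, the primary control plane .2, so m02 is computed as 192.168.49.3 and later passed verbatim as --ip to docker run below. A minimal sketch of that arithmetic (hypothetical helper; assumes the /24 subnet used here):

    package kic

    import "net"

    // nthNodeIP returns the static IP for node n (1 = primary) on a /24
    // whose gateway is x.y.z.1, matching the "calculated static IP" line.
    func nthNodeIP(gateway net.IP, n int) net.IP {
        ip := make(net.IP, 4)
        copy(ip, gateway.To4())
        ip[3] += byte(n) // .1 + 2 = .3 for ha-828033-m02
        return ip
    }
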
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
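
The root cause is visible in the inspect command itself: the template indexes .NetworkSettings.Networks with the machine name, "ha-828033-m02", but the container was attached to the cluster network "ha-828033" (the --network flag in the docker run above), so the {{with}} body never executes and the command prints an empty string. Splitting that on "," yields a single empty element, which is exactly the "got 1 values: []" in the warning. A self-contained reproduction of the check (simplified; names assumed):

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Keyed by "ha-828033-m02" the template emits nothing; keyed by
        // "ha-828033" it would emit "192.168.49.3," (IPv4 plus empty IPv6).
        out := ""
        addrs := strings.Split(strings.TrimSpace(out), ",")
        if len(addrs) != 2 {
            fmt.Printf("container addresses should have 2 values, got %d values: %v\n", len(addrs), addrs)
        }
    }

Every retry that follows re-runs the identical template with the identical key, so the backoff loop below can never succeed; it only ends when the retry budget is exhausted at 17:54:43.
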
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
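
The retry.go delays above start around 100µs and roughly double with jitter until they reach seconds, and the whole loop is abandoned once its time budget is spent (about 1m11s of the 1m17s LocalClient.Create total). A sketch of that pattern; the shape is inferred from the logged delays, not minikube's actual helper:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries f with jittered, roughly doubling delays until the
    // budget is exhausted, then returns the last error, like the loop above.
    func retryExpo(budget time.Duration, f func() error) error {
        deadline := time.Now().Add(budget)
        delay := 100 * time.Microsecond
        for {
            err := f()
            if err == nil || time.Now().After(deadline) {
                return err
            }
            // sleep somewhere in [delay/2, 3*delay/2), then double the base
            time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
            delay *= 2
        }
    }

    func main() {
        err := retryExpo(50*time.Millisecond, func() error {
            return errors.New("configureAuth: error getting ip during provisioning")
        })
        fmt.Println(err)
    }
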
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
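The long configureAuth block above is minikube's retry helper applying a jittered, roughly exponential backoff: the waits grow from about 111µs to over 30s until provisionDockerMachine has spent about a minute and a half, at which point the last error is surfaced. A minimal sketch of that pattern (a hypothetical helper, not minikube's actual retry.go):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn until it succeeds or maxElapsed has passed,
	// roughly doubling the wait each attempt and adding jitter so concurrent
	// retries do not synchronize.
	func retryWithBackoff(fn func() error, initial, maxElapsed time.Duration) error {
		start := time.Now()
		delay := initial
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("gave up after %s: %w", time.Since(start), err)
			}
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
			delay *= 2
		}
	}

	func main() {
		err := retryWithBackoff(func() error {
			return errors.New("error getting ip during provisioning")
		}, 100*time.Microsecond, 2*time.Second)
		fmt.Println(err)
	}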
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 

                                                
                                                
** /stderr **
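The root cause is visible in the repeated cli_runner call: every configureAuth attempt runs docker container inspect with the template {{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} and gets an empty string back. When the container has no entry for that network, {{with}} skips its body entirely, so the expected "IPv4,IPv6" pair never prints, and splitting the empty output on the comma yields one field instead of two. A sketch that reproduces the failing check (an approximation of the provisioning code, not its exact source):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseAddrs mimics the check that fails in the log: the inspect template
	// prints "IPv4,IPv6", so a healthy container yields two comma-separated
	// fields; a container missing from the network yields one empty field.
	func parseAddrs(templateOutput string) (ipv4, ipv6 string, err error) {
		fields := strings.Split(templateOutput, ",")
		if len(fields) != 2 {
			return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v",
				len(fields), fields[1:])
		}
		return fields[0], fields[1], nil
	}

	func main() {
		_, _, err := parseAddrs("") // what the template returned for ha-828033-m02
		fmt.Println(err)
		v4, v6, _ := parseAddrs("192.168.49.2,") // a healthy, IPv4-only node
		fmt.Printf("ipv4=%q ipv6=%q\n", v4, v6)
	}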
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-linux-amd64 start -p ha-828033 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=docker" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
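For contrast, the inspect output above is what a healthy node looks like: the primary container does carry a "ha-828033" entry under NetworkSettings.Networks with IPAddress 192.168.49.2, so the template that kept failing for ha-828033-m02 would return "192.168.49.2," here. A quick way to confirm that by hand, sketched in Go and shelling out just as cli_runner does (the function name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// inspectIPs runs the same template the log shows, returning the raw
	// "IPv4,IPv6" string for a container on a named network.
	func inspectIPs(container, network string) (string, error) {
		tmpl := fmt.Sprintf(
			`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`,
			network)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		out, err := inspectIPs("ha-828033", "ha-828033")
		fmt.Printf("out=%q err=%v\n", out, err) // expect "192.168.49.2," on the primary
	}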
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                  Args                                   |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-164981 image load --daemon                                   | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-164981                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| license        |                                                                         | minikube          | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| update-context | functional-164981                                                       | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-164981                                                       | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| update-context | functional-164981                                                       | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | update-context                                                          |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                  |                   |         |         |                     |                     |
	| image          | functional-164981 image ls                                              | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| image          | functional-164981 image load --daemon                                   | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-164981                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981 image ls                                              | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| image          | functional-164981 image load --daemon                                   | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-164981                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981 image ls                                              | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| image          | functional-164981 image save                                            | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-164981                |                   |         |         |                     |                     |
	|                | /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981 image rm                                              | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-164981                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981 image ls                                              | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| image          | functional-164981 image load                                            | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981 image ls                                              | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| image          | functional-164981 image save --daemon                                   | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-164981                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| ssh            | functional-164981 ssh pgrep                                             | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC |                     |
	|                | buildkitd                                                               |                   |         |         |                     |                     |
	| image          | functional-164981                                                       | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC |                     |
	|                | image ls --format json                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981                                                       | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC |                     |
	|                | image ls --format short                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981                                                       | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | image ls --format yaml                                                  |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981                                                       | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | image ls --format table                                                 |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                       |                   |         |         |                     |                     |
	| image          | functional-164981 image build -t                                        | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	|                | localhost/my-image:functional-164981                                    |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                        |                   |         |         |                     |                     |
	| image          | functional-164981 image ls                                              | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| delete         | -p functional-164981                                                    | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| start          | -p ha-828033 --wait=true                                                | ha-828033         | jenkins | v1.33.1 | 22 May 24 17:52 UTC |                     |
	|                | --memory=2200 --ha                                                      |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                  |                   |         |         |                     |                     |
	|                | --driver=docker                                                         |                   |         |         |                     |                     |
	|                | --container-runtime=docker                                              |                   |         |         |                     |                     |
	|----------------|-------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
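Each entry below starts with a severity letter, an mmdd date, a microsecond timestamp, a thread id, and a file:line source location. When sifting a run this long, a small parser for that format helps; the following sketch handles single-line entries only (it ignores multi-line continuations such as the SSH script bodies earlier in this report):

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogLine matches the "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg"
	// format described above.
	var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^ :]+:\d+)\] (.*)$`)

	func main() {
		line := `I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...`
		if m := glogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}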
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
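The failed `docker network inspect` a few lines up is the expected first-start path: minikube probes for an existing network, and only after the lookup fails (exit status 1, "network ha-828033 not found") does it pick a free private subnet and create one. The same inspect-or-create idiom, reduced to a sketch (network name and subnet copied from the log; the label and masquerade options are omitted for brevity):

    # Probe for the network first; create it only when the lookup fails.
    NET=ha-828033
    if ! docker network inspect "$NET" >/dev/null 2>&1; then
      docker network create --driver=bridge \
        --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
        -o com.docker.network.driver.mtu=1500 "$NET"
    fi
    # Confirm the subnet the network ended up with.
    docker network inspect "$NET" --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'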
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
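The two `docker run --rm` invocations above are how the preload lands in the `ha-828033` volume before the node container exists: a throwaway container mounts the volume and untars into it. Reduced to its essentials (volume and file names are illustrative; the tarball path mirrors the log, and `-I lz4` assumes the image ships an lz4 binary, as the kicbase image evidently does here):

    # Populate a named volume by running a short-lived container that mounts it.
    docker volume create demo-data
    docker run --rm \
      -v "$PWD/preloaded-images.tar.lz4:/preloaded.tar:ro" \
      -v demo-data:/extractDir \
      --entrypoint /usr/bin/tar \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887 \
      -I lz4 -xf /preloaded.tar -C /extractDir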
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
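For readability, here is the same `docker run` with only the load-bearing flags, annotated; this is a condensed restatement of the command above, not a different one. `--privileged` plus the read-only /lib/modules mount let kube-proxy and kindnet reach kernel facilities, the tmpfs mounts give systemd a writable /run and /tmp, and the ports are published on 127.0.0.1 with randomly assigned host ports that are discovered later via `docker container inspect`:

    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --network ha-828033 --ip 192.168.49.2 \
      --volume ha-828033:/var \
      --memory=2200mb --cpus=2 \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      --hostname ha-828033 --name ha-828033 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887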
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
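The key-injection steps above reduce to a small shell sequence; this sketch mirrors them with docker cp/exec (in-container paths are from the log, the key filename is illustrative):

    # Generate a key pair and install the public half for the in-container
    # 'docker' user, then fix ownership, as the kic_runner steps above do.
    ssh-keygen -t rsa -N '' -f ./id_rsa
    docker exec ha-828033 mkdir -p /home/docker/.ssh
    docker cp ./id_rsa.pub ha-828033:/home/docker/.ssh/authorized_keys
    docker exec --privileged ha-828033 chown -R docker:docker /home/docker/.ssh
    # The host port mapped to container port 22 comes from container metadata:
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-828033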
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
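minikube generates that server certificate in Go; an approximately equivalent openssl flow, assuming the CA pair from the paths above is at hand, would look like this (an illustration of the same cert shape, not minikube's actual implementation):

    # Sign a server cert whose org and SANs match the log line above.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -subj "/O=jenkins.ha-828033/CN=minikube" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -days 365 -out server.pem \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-828033,DNS:localhost,DNS:minikube")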
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
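The `sudo diff -u ... || { mv ...; daemon-reload; restart; }` one-liner that produced the diff above is an idempotent-update idiom: the unit file is swapped in and docker restarted only when the rendered candidate differs from the installed file, and the diff itself is what got echoed back over SSH. The same pattern, generalized (`render_unit` is a hypothetical stand-in for whatever produces the candidate file):

    # Update a config file only when it actually changed.
    render_unit > /tmp/docker.service.new
    if ! sudo diff -u /lib/systemd/system/docker.service /tmp/docker.service.new; then
      sudo mv /tmp/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi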
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
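The 130-byte /etc/docker/daemon.json written just before this restart is not echoed into the log. Based on the "cgroupfs" driver minikube selected above, a plausible shape is the following (the exact contents are an assumption, not copied from this log):

    # Hypothetical reconstruction of the daemon.json minikube writes here.
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker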
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
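The "Will wait 60s for socket path" step corresponds to polling `stat` on the CRI socket until it appears. A minimal shell equivalent of that wait:

    # Poll for the unix socket with a 60s deadline, mirroring the wait above.
    deadline=$((SECONDS + 60))
    until [ -S /var/run/cri-dockerd.sock ]; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "timed out waiting for /var/run/cri-dockerd.sock" >&2
        exit 1
      fi
      sleep 1
    done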
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
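Note: the giving-up message above is a capability probe: if `lsmod | grep ip_vs` exits non-zero, kube-vip's IPVS-based control-plane load-balancing is skipped and only the ARP-advertised VIP (vip_arp=true in the config below) is set up. A minimal sketch of that decision, assuming a POSIX shell is available:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the probe in the log: non-zero exit means no ip_vs modules loaded.
	if err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run(); err != nil {
		fmt.Println("ip_vs unavailable: fall back to ARP-only kube-vip config")
		return
	}
	fmt.Println("ip_vs available: IPVS load-balancing can be enabled")
}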
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
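Note: the bash one-liner above makes the hosts entry idempotent: grep -v strips any existing control-plane.minikube.internal line, the current mapping is appended, and the result is copied back over /etc/hosts. The same string surgery in a self-contained Go sketch (operating on an in-memory copy rather than the real file):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops blank lines and any existing entry for name, then
// appends "ip<TAB>name", mirroring the grep -v / echo pipeline above.
func upsertHost(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n192.168.49.200\tcontrol-plane.minikube.internal\n"
	fmt.Print(upsertHost(hosts, "192.168.49.254", "control-plane.minikube.internal"))
}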
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
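Note: the three test/hash/ln sequences above follow OpenSSL's CA directory convention: `openssl x509 -hash -noout -in cert.pem` prints the certificate's subject-name hash (e.g. b5213941 for minikubeCA above), and a `<hash>.0` symlink in /etc/ssl/certs is what lets TLS verification locate the CA by hash. A hedged sketch of the same two steps (writing the symlink requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // ln -fs equivalent: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Println("symlink failed (expected without root):", err)
		return
	}
	fmt.Println("linked", link, "->", cert)
}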
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
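Note: each of the four checks above is the same pattern: grep the kubeconfig for the expected control-plane endpoint and delete the file if the endpoint is absent, so kubeadm regenerates it from scratch. A compact sketch of that loop (a hypothetical standalone helper, not minikube's code):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			fmt.Println("stale or missing, removing:", f)
			os.Remove(f) // the sudo rm -f step in the log
		}
	}
}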
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
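Note: the repeated `kubectl get sa default` calls above are a readiness poll: the "default" service account only appears once the controller-manager's service-account controller has run, and minikube must wait for it before binding kube-system privileges (the elevateKubeSystemPrivileges step timed here at ~12s, retrying roughly every 500ms). A minimal sketch of such a poll, assuming kubectl is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// Same probe as the log: does the default service account exist yet?
		if exec.Command("kubectl", "get", "sa", "default").Run() == nil {
			fmt.Println("default service account present; safe to apply RBAC")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}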
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
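Note: the sed pipeline above edits the Corefile stored in the coredns ConfigMap, inserting a hosts block that resolves host.minikube.internal to the network gateway (192.168.49.1) before queries fall through to the forward plugin. The same string surgery in a self-contained Go sketch:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} stanza immediately before the
// forward directive, mirroring the sed /i command in the log above.
func injectHostRecord(corefile, ip string) string {
	block := "        hosts {\n           " + ip + " host.minikube.internal\n           fallthrough\n        }\n"
	return strings.Replace(corefile, "        forward .", block+"        forward .", 1)
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
}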
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
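Note: node IPs on the kic network are assigned deterministically rather than by DHCP: the gateway holds .1, the primary node .2, and each additional node takes the next host address, which is how m02 lands on 192.168.49.3. A sketch of that arithmetic (the function name is illustrative):

package main

import (
	"fmt"
	"net"
)

// nthNodeIP returns the nth host address after the gateway,
// e.g. n=1 -> 192.168.49.2 (primary), n=2 -> 192.168.49.3 (m02).
// Only valid for small n inside a /24.
func nthNodeIP(gateway net.IP, n int) net.IP {
	out := make(net.IP, 4)
	copy(out, gateway.To4())
	out[3] += byte(n)
	return out
}

func main() {
	gw := net.ParseIP("192.168.49.1")
	fmt.Println(nthNodeIP(gw, 1))
	fmt.Println(nthNodeIP(gw, 2))
}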
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
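
The four-second step above is the preload trick: a throwaway container mounts the lz4 tarball read-only alongside the node's volume and untars the cached images into it, so the node container later starts with a prepopulated image store. A sketch of the same invocation via os/exec, under the assumption that driving the docker CLI directly is acceptable; extractPreload is an illustrative name, and the paths in main are placeholders taken from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload runs a short-lived container whose entrypoint is tar,
    // decompressing the lz4 preload tarball into the mounted volume.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(extractPreload(
            "preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4",
            "ha-828033-m02",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"))
    }
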
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
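The dial target 127.0.0.1:32792 is the ephemeral host port Docker assigned to the container's 22/tcp (published earlier as --publish=127.0.0.1::22 and read back with the inspect template above). A minimal sketch of the same "run a command over SSH" step using golang.org/x/crypto/ssh, assuming the key path and port from this log:

    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway test VM only
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32792", cfg) // host port mapped to the container's 22/tcp
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput("hostname")
        fmt.Printf("%s %v\n", out, err) // ha-828033-m02
    }
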
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
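
Note the shape of this failure: the inspect template indexes .NetworkSettings.Networks by the container name "ha-828033-m02", but the container was attached with --network ha-828033 (see the docker run above), so the template appears to expand to an empty string. Splitting "" on a comma yields a single empty element, which is exactly "got 1 values: []". A small reproduction of that error text; parseIPs is illustrative, not minikube's source:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseIPs mimics the validation that fails above: the template is
    // expected to print "<ipv4>,<ipv6>", so anything else trips the check.
    func parseIPs(inspectOutput string) ([]string, error) {
        ips := strings.Split(strings.TrimSpace(inspectOutput), ",")
        if len(ips) != 2 {
            return nil, fmt.Errorf("container addresses should have 2 values, got %d values: %+v", len(ips), ips)
        }
        return ips, nil
    }

    func main() {
        // An unmatched key makes the {{with (index ...)}} body emit nothing:
        _, err := parseIPs("")
        fmt.Println(err) // container addresses should have 2 values, got 1 values: []
    }
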
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
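
The retry delays above grow from roughly 100µs to ~19s with jitter until provisioning gives up after about a minute. A sketch of that exponential-backoff-with-jitter pattern; the function name and constants are illustrative, not minikube's retry.go:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries op with an exponentially growing, jittered delay
    // until it succeeds or an overall deadline passes.
    func retryExpo(op func() error, initial, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        delay := initial
        for {
            err := op()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("giving up: %w", err)
            }
            jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", jittered, err)
            time.Sleep(jittered)
            delay *= 2
        }
    }

    func main() {
        errIP := errors.New("error getting ip during provisioning")
        fmt.Println(retryExpo(func() error { return errIP }, 100*time.Microsecond, 2*time.Second))
    }
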
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
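
Cleanup after the failed attempt follows a consistent shape: power the node container off with "sudo init 0", poll its status until it stops, force-remove it, then try to remove the network, which rightly fails while the primary node is still attached. A sketch of the poll-until-stopped step; waitStopped is an illustrative name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // waitStopped powers the container off, then polls docker's
    // {{.State.Status}} (docker reports "exited" once it is down).
    func waitStopped(name string, timeout time.Duration) error {
        _ = exec.Command("docker", "exec", "--privileged", "-t", name,
            "/bin/bash", "-c", "sudo init 0").Run() // may fail if already down
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            out, err := exec.Command("docker", "container", "inspect", name,
                "--format={{.State.Status}}").Output()
            if err == nil && strings.TrimSpace(string(out)) == "exited" {
                return nil
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("%s did not stop within %v", name, timeout)
    }

    func main() {
        fmt.Println(waitStopped("ha-828033-m02", 30*time.Second))
    }
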
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
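Unlike the first attempt, the initial dial here (17:54:58) was reset because sshd in the fresh container was not accepting connections yet; about three seconds later the same hostname command succeeds. A sketch of waiting for the forwarded port before the first SSH attempt; waitForTCP is an illustrative name, and the port is the one this attempt happened to get:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForTCP dials the forwarded SSH port until it accepts a
    // connection, smoothing over the startup window seen in the log.
    func waitForTCP(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s not reachable within %v", addr, timeout)
    }

    func main() {
        fmt.Println(waitForTCP("127.0.0.1:32797", 30*time.Second))
    }
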
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
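The failure above traces to the provisioner's IP lookup: the Go template passed to `docker container inspect` prints "IPv4,IPv6" for the .NetworkSettings.Networks entry named "ha-828033-m02", and when the container has no entry under that exact key the template expands to an empty string, so the comma-split yields one field instead of two (splitting "" gives a single empty field, matching the "got 1 values" message). minikube then retries with jittered, roughly doubling backoff (the retry.go lines, growing from ~700µs up to ~32s) until provisioning gives up at ubuntu.go:189 after about 84 seconds. The following is a minimal sketch of that pattern, not minikube's actual source; the helper names containerAddrs and retryWithBackoff and the 90-second budget are assumptions for illustration.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"strings"
	"time"
)

// containerAddrs runs `docker container inspect` with the same template the
// log shows and expects "IPv4,IPv6". If the container is not attached under
// the named network key, the template expands to "", the split yields one
// empty field, and we return the "should have 2 values" error seen above.
func containerAddrs(container, network string) (string, string, error) {
	tmpl := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", "", err
	}
	fields := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(fields) != 2 {
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(fields), fields)
	}
	return fields[0], fields[1], nil
}

// retryWithBackoff keeps calling fn until it succeeds or the budget is spent,
// roughly doubling a jittered delay each attempt, as in the retry.go lines.
func retryWithBackoff(budget time.Duration, fn func() error) error {
	start := time.Now()
	delay := 500 * time.Microsecond
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > budget {
			return fmt.Errorf("giving up: last error: %w", err)
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	err := retryWithBackoff(90*time.Second, func() error {
		_, _, err := containerAddrs("ha-828033-m02", "ha-828033-m02")
		return err
	})
	fmt.Println(err)
}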
	
	
	==> Docker <==
	May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c61ca7a89838df4da2bace0fa74ffeab37fcf68c1bd7b502ff0191f46ba59f4/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ca6a020652c5315e9cdab62b3f33c6eff8881ec5e8ef8003e9735e75932adcc6/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1648bcaea393a0b5ddfbf0f768d5e989217a09977f420bcefe5d82554e1e83fe/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/06f42956ef3cd3359e1bcca52e41ff3b2048fb4c3c75f96636ea439a7ffe37c9/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:05 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:05Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4d7edccdc49b22ec9cc59e71bc3d4f4089c78b1b448eab3c8012fc9a32dfc290/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:08 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:08Z" level=info msg="Stop pulling image ghcr.io/kube-vip/kube-vip:v0.8.0: Status: Downloaded newer image for ghcr.io/kube-vip/kube-vip:v0.8.0"
	May 22 17:53:25 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7920c4e0230819f5c621ee8ab19a8bd59c1053a4c4c9148fc2ab7993a5422497/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	May 22 17:53:25 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7caff96cd793b86249a3872a817399fd83ab776260c99c039376f84ba3c96e89/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:27 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:27 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:30 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:30Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240513-cd2ac642: Status: Downloaded newer image for kindest/kindnetd:v20240513-cd2ac642"
	May 22 17:53:32 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.399087699Z" level=info msg="ignoring event" container=dd5bd702646a46de165f70b974819728d0d1e4dcd480a756580f462132b4e49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.477631846Z" level=info msg="ignoring event" container=8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.558216062Z" level=info msg="ignoring event" container=63f49aaadee913b978ed9eff66b35c52ee24c7ed5fb7c74f4c3fc76578c0f4a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.607461254Z" level=info msg="ignoring event" container=91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:39 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                       2 minutes ago       Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                       2 minutes ago       Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8            2 minutes ago       Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                       3 minutes ago       Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                       3 minutes ago       Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                       3 minutes ago       Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                       3 minutes ago       Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   3 minutes ago       Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                       3 minutes ago       Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                       3 minutes ago       Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                       3 minutes ago       Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                       3 minutes ago       Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
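The "dial tcp 10.96.0.1:443: connect: network is unreachable" errors here mean this first CoreDNS instance came up before any route to the Service CIDR existed (kube-proxy and kindnet were still starting), which is consistent with the Exited attempt-0 coredns containers in the status table above; their attempt-1 replacements synced cleanly. A reachability check of the kind CoreDNS is making can be reproduced with a plain TCP dial; this probe is an illustrative sketch, not part of the test harness:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Attempt the same connection CoreDNS makes to the kubernetes Service
	// VIP; "network is unreachable" means no route to 10.96.0.0/12 yet.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver VIP not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver VIP reachable")
}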
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 17:56:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 17:53:42 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 17:53:42 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 17:53:42 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 17:53:42 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m4s
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m4s
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m17s
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m4s
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m3s   kube-proxy       
	  Normal  Starting                 3m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m17s  kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m17s  kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m17s  kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m17s  kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           3m5s   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
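These conditions and events are what the HA checks poll. A minimal client-go sketch of such a readiness read follows; the kubeconfig path and node name are taken from this report, but the code itself is illustrative, not the harness's implementation:

package main

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as reported in this run's environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-9771/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-828033", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Report the Ready condition, matching the Conditions table above.
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			fmt.Printf("Ready=%s (%s)\n", c.Status, c.Reason)
		}
	}
}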
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.056485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-05-22T17:53:06.056617Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T17:53:06.057265Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T17:53:06.057494Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T17:53:06.057531Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T17:53:06.0576Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> kernel <==
	 17:56:29 up 38 min,  0 users,  load average: 0.37, 0.98, 0.67
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 17:54:21.579210       1 main.go:227] handling current node
	I0522 17:54:31.591187       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:54:31.591210       1 main.go:227] handling current node
	I0522 17:54:41.594946       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:54:41.594968       1 main.go:227] handling current node
	I0522 17:54:51.606297       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:54:51.606319       1 main.go:227] handling current node
	I0522 17:55:01.609679       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:55:01.609707       1 main.go:227] handling current node
	I0522 17:55:11.622453       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:55:11.622494       1 main.go:227] handling current node
	I0522 17:55:21.625731       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:55:21.625756       1 main.go:227] handling current node
	I0522 17:55:31.637651       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:55:31.637673       1 main.go:227] handling current node
	I0522 17:55:41.650145       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:55:41.650169       1 main.go:227] handling current node
	I0522 17:55:51.658448       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:55:51.658468       1 main.go:227] handling current node
	I0522 17:56:01.662230       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:56:01.662257       1 main.go:227] handling current node
	I0522 17:56:11.674509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:56:11.674537       1 main.go:227] handling current node
	I0522 17:56:21.686569       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 17:56:21.686592       1 main.go:227] handling current node
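The kindnet log is a fixed-interval reconcile loop: roughly every 10 seconds it lists nodes and, for each one, either notes the current node or (for remote nodes, none here since -m02 never joined) programs routes to that node's pod CIDR. A stripped-down sketch of that loop shape, with assumed types and names:

package main

import (
	"fmt"
	"time"
)

// Node stands in for the v1.Node objects kindnet watches; only the fields
// this sketch needs are modeled.
type Node struct {
	Name string
	IPs  []string
}

func main() {
	self := "ha-828033"
	listNodes := func() []Node { // placeholder for an API list call
		return []Node{{Name: "ha-828033", IPs: []string{"192.168.49.2"}}}
	}
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for range ticker.C {
		for _, n := range listNodes() {
			fmt.Printf("Handling node with IPs: %v\n", n.IPs)
			if n.Name == self {
				fmt.Println("handling current node")
				continue
			}
			// A remote node would get routes to its pod CIDR installed here.
		}
	}
}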
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143916       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0522 17:53:09.143936       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0522 17:53:09.143944       1 shared_informer.go:320] Caches are synced for configmaps
	I0522 17:53:09.143872       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0522 17:53:09.143956       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.564624       1 shared_informer.go:320] Caches are synced for endpoint
	I0522 17:53:24.564652       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0522 17:53:24.564727       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-828033"
	I0522 17:53:24.564763       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0522 17:53:24.564846       1 shared_informer.go:320] Caches are synced for job
	I0522 17:53:24.564925       1 shared_informer.go:320] Caches are synced for attach detach
	I0522 17:53:24.565192       1 shared_informer.go:320] Caches are synced for deployment
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345758    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e82964e-040d-419c-969e-e89b79f50b09-lib-modules\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345873    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e82964e-040d-419c-969e-e89b79f50b09-kube-proxy\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345899    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e82964e-040d-419c-969e-e89b79f50b09-xtables-lock\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (218.21s)
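
Reading the post-mortem above: the apiserver, controller-manager, kube-proxy, kubelet, and storage-provisioner on ha-828033 all report a healthy single-node startup, and the kube-scheduler's early "forbidden" list/watch warnings are the usual startup race before RBAC bootstrapping completes. Notably, the node-lifecycle controller only ever registers the node "ha-828033", so the --ha start appears to have failed while adding the additional control-plane nodes rather than on the primary. A quick way to confirm node membership on a live profile (commands sketched here following the harness conventions above) is:

	out/minikube-linux-amd64 -p ha-828033 node list
	out/minikube-linux-amd64 -p ha-828033 status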

                                                
                                    
TestMultiControlPlane/serial/DeployApp (720.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- rollout status deployment/busybox
E0522 17:56:55.310031   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:55.315310   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:55.325534   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:55.345793   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:55.386020   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:55.466294   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:55.626660   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:55.947216   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:56.588118   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:56:57.868452   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:57:00.429150   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:57:05.549644   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:57:15.790659   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:57:24.838549   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:57:36.271002   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:57:52.526857   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:58:17.232946   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 17:59:39.153507   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:01:55.310177   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:02:22.994610   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:02:24.838502   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
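
The cert_rotation errors interleaved above come from client-go certificate-reload watchers still pointing at profiles (functional-164981, addons-340431) that were deleted earlier in the run — see the delete entry in the Audit table below; they appear to be background noise unrelated to this failure.
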
ha_test.go:133: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- rollout status deployment/busybox: exit status 1 (10m3.087790492s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
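
The "exceeded its progress deadline" error is the deployment controller giving up after progressDeadlineSeconds (600s by default for Deployments), which matches the ~10-minute rollout wait above: only 1 of 3 busybox replicas ever became available. A shorter, bounded re-check (assuming the same profile and deployment name) would be:

	out/minikube-linux-amd64 kubectl -p ha-828033 -- rollout status deployment/busybox --timeout=60s
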
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E0522 18:06:55.310063   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E0522 18:07:24.838638   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
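
A single pod IP for a 3-replica deployment points at a scheduling problem rather than DNS: presumably because the test manifest spreads replicas across nodes, the other two replicas have nowhere to land on a one-node cluster. The usual next step (a sketch, same harness conventions and pod name as below) is to check pod placement and events:

	out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o wide
	out/minikube-linux-amd64 kubectl -p ha-828033 -- describe pod busybox-fc5497c4f-cw6wc
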
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- nslookup kubernetes.io: exit status 1 (104.703421ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cw6wc does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-cw6wc could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-nhhq2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- nslookup kubernetes.io: exit status 1 (103.413713ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-x4bg9 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-x4bg9 could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- nslookup kubernetes.default: exit status 1 (106.986958ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cw6wc does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-cw6wc could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-nhhq2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- nslookup kubernetes.default: exit status 1 (104.234382ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-x4bg9 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-x4bg9 could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (102.752267ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cw6wc does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-cw6wc could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-nhhq2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (106.035911ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-x4bg9 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-x4bg9 could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
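
Consistent with the single pod IP, only busybox-fc5497c4f-nhhq2 resolves names; cw6wc and x4bg9 fail with "does not have a host assigned", i.e. they are still Pending and unscheduled, so exec never reaches a container. One way to see phase and placement in one pass (pod label assumed from the test manifest):

	out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -l app=busybox \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\t"}{.spec.nodeName}{"\n"}{end}'
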
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
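
The inspect output confirms a single running container sized per the start flags: Memory 2306867200 bytes (the 2200MB request), NanoCpus 2000000000 (2 CPUs), and one endpoint at 192.168.49.2 on the ha-828033 network. The same fields can be pulled directly with a Go template (a sketch, not part of the harness):

	docker inspect ha-828033 --format '{{.HostConfig.Memory}} {{.HostConfig.NanoCpus}} {{(index .NetworkSettings.Networks "ha-828033").IPAddress}}'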
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| delete  | -p functional-164981                 | functional-164981 | jenkins | v1.33.1 | 22 May 24 17:52 UTC | 22 May 24 17:52 UTC |
	| start   | -p ha-828033 --wait=true             | ha-828033         | jenkins | v1.33.1 | 22 May 24 17:52 UTC |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=docker                      |                   |         |         |                     |                     |
	|         | --container-runtime=docker           |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- apply -f             | ha-828033         | jenkins | v1.33.1 | 22 May 24 17:56 UTC | 22 May 24 17:56 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- rollout status       | ha-828033         | jenkins | v1.33.1 | 22 May 24 17:56 UTC |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033         | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
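
Behind the two network_create steps above, minikube probes for a free private /24 and then creates a labeled bridge network. The equivalent manual command, using the subnet picked in this run, would be roughly:

    docker network create --driver=bridge \
      --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true \
      --label=name.minikube.sigs.k8s.io=ha-828033 \
      ha-828033
    # 192.168.49.2, the first client address in that range, is then
    # reserved as the static IP of the ha-828033 node container.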
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
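
The docker run above creates the node container itself. A trimmed restatement of the flags that matter (not the full command as logged):

    # --privileged plus writable /tmp and /run are what let the kicbase
    # image boot systemd and run its own dockerd and kubelet; the host's
    # kernel modules are shared read-only, the container gets the static
    # IP reserved on the ha-828033 network, and 8443 (the apiserver) is
    # published on a random localhost port for kubeconfig access.
    docker run -d -t --privileged --tmpfs /tmp --tmpfs /run \
      -v /lib/modules:/lib/modules:ro \
      --network ha-828033 --ip 192.168.49.2 \
      --memory=2200mb --cpus=2 --publish=127.0.0.1::8443 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a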
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
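
Taken together, the three SSH commands above implement hostname provisioning: read the current name, set and persist the new one, then make sure the name resolves locally. Consolidated into a sketch:

    hostname                                    # read the current name
    sudo hostname ha-828033                     # set it for this boot
    echo "ha-828033" | sudo tee /etc/hostname   # persist across reboots
    # Map the name to 127.0.1.1 (the Debian/Ubuntu convention) if missing.
    grep -q 'ha-828033' /etc/hosts ||
      echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts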
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
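
The Synchronizing/Executing lines above are the tail end of `systemctl enable`, which only ran because the diff before them was non-empty: the unit update issued at 17:52:58 is a compare-and-swap on the service file. The pattern, restated from the logged command:

    # Write the candidate unit to docker.service.new first, then:
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload &&
      sudo systemctl -f enable docker &&
      sudo systemctl -f restart docker
    }
    # diff exits 0 when the files already match, so an unchanged unit is
    # left alone and docker is only restarted when the config really moved.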
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
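
The two find invocations above normalize the CNI configs before kindnet takes over: the loopback config gets a "name" field and cniVersion 1.0.0, and any bridge/podman configs are parked. Restated more readably (same effect, assuming the stock kicbase layout):

    # Patch the loopback config in place.
    for f in /etc/cni/net.d/*loopback.conf*; do
      grep -q '"name"' "$f" ||
        sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' "$f"
      sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' "$f"
    done
    # Disable competing bridge configs so kindnet is the only CNI.
    for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
      [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled"
    done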
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
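
The run of sed edits above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host. The load-bearing changes, condensed:

    # Pin the pause image and keep containerd on cgroupfs, not systemd.
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    # Enable forwarding for pod traffic, then apply everything.
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart containerd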
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
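
cri-docker is socket-activated, which is why the socket unit is unmasked and enabled before the service itself is restarted. The ordering above, in short:

    sudo systemctl unmask cri-docker.socket
    sudo systemctl enable cri-docker.socket
    sudo systemctl daemon-reload
    sudo systemctl restart cri-docker.socket
    sudo systemctl restart cri-docker.service
    # minikube then waits up to 60s for /var/run/cri-dockerd.sock to appear.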
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
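
The /etc/hosts rewrite above is idempotent: any stale host.minikube.internal line is stripped, the current mapping is appended, and the temp file is copied back with sudo (a plain redirection would run as the unprivileged SSH user and fail). Unrolled:

    { grep -v $'\thost.minikube.internal$' /etc/hosts;
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts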
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
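The empty ExecStart= line in the unit above is deliberate: in a systemd drop-in, a bare ExecStart= clears the command inherited from the base kubelet.service before the next line installs minikube's own invocation. A sketch of rendering such a drop-in with text/template (the template text is illustrative, not minikube's exact one):

    package main

    import (
    	"os"
    	"text/template"
    )

    // kubeletDropIn is an illustrative 10-kubeadm.conf template; the empty
    // ExecStart= resets whatever the base unit defined before overriding it.
    const kubeletDropIn = `[Service]
    ExecStart=
    ExecStart={{.Binary}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml
    `

    func main() {
    	t := template.Must(template.New("dropin").Parse(kubeletDropIn))
    	_ = t.Execute(os.Stdout, map[string]string{
    		"Binary": "/var/lib/minikube/binaries/v1.30.1/kubelet",
    		"Node":   "ha-828033",
    		"IP":     "192.168.49.2",
    	})
    }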
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
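The generated kubeadm.yaml above is one stream of four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by ---. A quick way to sanity-check a stream like this before handing it to kubeadm is to decode it document by document; a sketch using gopkg.in/yaml.v3 (an assumed dependency, any multi-document YAML decoder works):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err) // a malformed document would surface here
    		}
    		fmt.Printf("found %v/%v\n", doc["apiVersion"], doc["kind"])
    	}
    }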
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
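kube-vip's control-plane load balancing requires the ip_vs kernel modules, so minikube probes with lsmod | grep ip_vs and, as here, merely downgrades the generated config when the probe fails instead of aborting the start. Since lsmod only formats /proc/modules, an equivalent in-process check might look like this sketch:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // hasIPVS reports whether any ip_vs* module is currently loaded by
    // scanning /proc/modules, which is exactly what lsmod prints.
    func hasIPVS() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		if strings.HasPrefix(sc.Text(), "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, sc.Err()
    }

    func main() {
    	ok, err := hasIPVS()
    	fmt.Println(ok, err)
    }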
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
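The certs steps above all follow one pattern: the cached minikubeCA signs per-profile key pairs whose SANs cover every address a client may dial, here 10.96.0.1 (the service VIP), 127.0.0.1, 10.0.0.1, the node IP 192.168.49.2, and the HA VIP 192.168.49.254. A minimal crypto/x509 sketch of signing such a certificate against an existing CA (a hypothetical helper, not minikube's crypto.go):

    package certs

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // signProfileCert creates a key pair and returns a DER certificate for it,
    // signed by the given CA and valid for the listed IP SANs.
    func signProfileCert(ca *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{CommonName: cn},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // ~26280h, matching CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }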
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
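The openssl x509 -hash / ln -fs pairs above reproduce what c_rehash does: OpenSSL resolves trust anchors in /etc/ssl/certs by subject-hash filename, so every PEM added there needs a <hash>.0 symlink. The same step, sketched with the identical shell-out (a hypothetical helper):

    package certs

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkBySubjectHash creates /etc/ssl/certs/<hash>.0 pointing at pemPath,
    // asking openssl for the subject hash just like the commands above.
    func linkBySubjectHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
    	_ = os.Remove(link) // replace a stale link, mirroring `ln -fs`
    	return os.Symlink(pemPath, link)
    }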
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
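This grep-then-rm pass is the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already points at https://control-plane.minikube.internal:8443; on this fresh node every grep exits 2 and the rm is a no-op. Reduced to a sketch:

    package main

    import (
    	"os"
    	"strings"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		// Missing file or wrong endpoint: remove so kubeadm regenerates it.
    		if err != nil || !strings.Contains(string(data), endpoint) {
    			_ = os.Remove(f)
    		}
    	}
    }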
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
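The burst of kubectl get sa default calls above is a 500ms poll: the default service account appears only after the token controller has run, and minikube blocks on it (about 12 seconds here) before the kube-system RBAC binding it just created is useful. The same wait, sketched with os/exec (minikube actually drives this over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
    			"get", "sa", "default").Run()
    		if err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // same cadence as the log above
    	}
    	fmt.Println("timed out waiting for default service account")
    }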
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
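The long sed pipeline above edits the coredns ConfigMap in flight: it inserts a hosts block mapping host.minikube.internal to the gateway 192.168.49.1 just before the forward plugin, turns on query logging, then pipes the result to kubectl replace. The Corefile splice on its own, as a Go sketch:

    package main

    import (
    	"fmt"
    	"strings"
    )

    const hostsBlock = `        hosts {
               192.168.49.1 host.minikube.internal
               fallthrough
            }
    `

    // injectHosts splices the hosts plugin in front of the forward plugin,
    // like the sed expression in the runner command above.
    func injectHosts(corefile string) string {
    	var b strings.Builder
    	for _, line := range strings.SplitAfter(corefile, "\n") {
    		if strings.HasPrefix(strings.TrimLeft(line, " "), "forward . /etc/resolv.conf") {
    			b.WriteString(hostsBlock)
    		}
    		b.WriteString(line)
    	}
    	return b.String()
    }

    func main() {
    	fmt.Println(injectHosts("    .:53 {\n        forward . /etc/resolv.conf\n    }\n"))
    }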
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
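The static IP appears to be derived from the cluster network: the gateway holds 192.168.49.1, the primary control plane 192.168.49.2, and each additional node the next octet, which is how "ha-828033-m02" lands on 192.168.49.3. A sketch under that assumption (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"net"
)

// staticIPFor derives a node IP from the subnet the way the log implies:
// the gateway is .1 and node N gets .(N+1). An assumption, not kic's code.
func staticIPFor(subnet string, nodeIndex int) (net.IP, error) {
	ip, _, err := net.ParseCIDR(subnet)
	if err != nil {
		return nil, err
	}
	ip = ip.To4()
	ip[3] = byte(1 + nodeIndex) // gateway is .1; node N is .(N+1)
	return ip, nil
}

func main() {
	ip, _ := staticIPFor("192.168.49.0/24", 2) // m02 is the second node
	fmt.Println(ip)                            // 192.168.49.3
}
```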
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
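The preload step runs two throwaway containers against the node's named volume: one (`--entrypoint /usr/bin/test -d /var/lib`) to create and validate the volume, then one that untars the lz4 preload into it, as timed above. A hedged os/exec sketch of the extraction step (paths illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

const kicBase = "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"

// extractPreload replays the docker run in the log: a one-shot container
// mounts the preload tarball read-only plus the node's named volume, and
// untars the cached images into it.
func extractPreload(tarball, volume string) error {
	args := []string{
		"run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball + ":/preloaded.tar:ro",
		"-v", volume + ":/extractDir",
		kicBase,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	}
	start := time.Now()
	out, err := exec.Command("docker", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("extract failed: %v: %s", err, out)
	}
	fmt.Printf("duration metric: took %s to extract preloaded images\n", time.Since(start))
	return nil
}

func main() {
	_ = extractPreload("/tmp/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4", "ha-828033-m02")
}
```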
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
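provisionDockerMachine reaches the node over the forwarded SSH port (127.0.0.1:32792 above) with the machine's generated key. A minimal sketch of that `hostname` check using golang.org/x/crypto/ssh, with the user and key path as they appear in the log and error handling trimmed:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runHostname dials the container's forwarded SSH port on 127.0.0.1 and runs
// `hostname`, mirroring the first command provisionDockerMachine issues.
func runHostname(port, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node, no pinned host key
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:"+port, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.Output("hostname")
	return string(out), err
}

func main() {
	out, err := runHostname("32792", "/home/jenkins/.minikube/machines/ha-828033-m02/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out) // expected: ha-828033-m02
}
```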
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
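Every configureAuth cycle that follows fails the same way. The inspect template indexes .NetworkSettings.Networks by the machine name "ha-828033-m02", but the container was attached with `--network ha-828033`, so the `{{with}}` block matches nothing and the command prints an empty string; splitting that on "," yields one empty field instead of the expected "<ipv4>,<ipv6>" pair, hence "should have 2 values, got 1 values". A tiny reproduction of the length check:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// What the inspect template emits when (index .NetworkSettings.Networks
	// "ha-828033-m02") finds no entry: the {{with}} body never runs, so the
	// output is empty instead of "<ipv4>,<ipv6>".
	output := ""
	addrs := strings.Split(strings.TrimSpace(output), ",")
	fmt.Printf("got %d values: %v\n", len(addrs), addrs) // got 1 values: []
}
```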
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
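The delays above (99µs, 208µs, ... up to 19.4s) are retry.go's jittered exponential backoff; once the overall provisioning budget is exhausted, provisionDockerMachine returns the last error and LocalClient.Create fails. A sketch of that shape (not minikube's actual retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponentially growing, jittered sleeps until
// maxTime elapses -- the same pattern as the retry.go delays in the log.
func retryExpo(fn func() error, base, maxTime time.Duration) error {
	deadline := time.Now().Add(maxTime)
	delay := base
	var err error
	for attempt := 0; ; attempt++ {
		if err = fn(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up after %d attempts: %w", attempt+1, err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) + 1))
		time.Sleep(delay + jitter)
		delay *= 2
	}
}

func main() {
	err := retryExpo(func() error {
		return errors.New("error getting ip during provisioning")
	}, 100*time.Microsecond, 2*time.Second)
	fmt.Println(err)
}
```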
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
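Stopping goes through `sudo init 0` inside the container, then polls Docker until the reported state leaves "running", which is why the second status check above finds the machine already stopped. A sketch of that poll:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitStopped polls the container state the way the log does after
// "sudo init 0", returning once Docker no longer reports it running.
func waitStopped(name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format={{.State.Status}}").Output()
		if err != nil {
			return err // container already removed counts as an error here
		}
		if status := strings.TrimSpace(string(out)); status != "running" {
			fmt.Printf("container %s status is %s\n", name, status)
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s to stop", name)
}

func main() {
	_ = waitStopped("ha-828033-m02", 30*time.Second)
}
```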
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
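The cleanup path removes the m02 container and then attempts `docker network rm ha-828033`, which fails because the primary node is still attached to the network; kic.go downgrades that to a warning ("which might be okay") since the network is shared across nodes. A best-effort sketch of that tolerance:

```go
package main

import (
	"fmt"
	"os/exec"
)

// removeNetworkBestEffort mirrors the cleanup in the log: try the removal,
// and if Docker refuses because a container is still attached, warn and move
// on rather than failing the retry.
func removeNetworkBestEffort(name string) {
	out, err := exec.Command("docker", "network", "rm", name).CombinedOutput()
	if err != nil {
		fmt.Printf("failed to remove network (which might be okay) %s: %s", name, out)
	}
}

func main() {
	removeNetworkBestEffort("ha-828033")
}
```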
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
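
The retry.go intervals above grow roughly geometrically, with jitter, from about 111µs up to ~32s before provisionDockerMachine gives up after ~1m27s. Below is a minimal sketch of that capped, jittered exponential-backoff pattern; the constants and the configureAuth stand-in are illustrative assumptions, not minikube's actual parameters.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// configureAuth is a hypothetical stand-in for the step that keeps failing in
// the log above; here it always fails so the backoff behavior is visible.
func configureAuth() error {
	return errors.New("error getting ip during provisioning: container addresses should have 2 values, got 1 values: []")
}

// retryWithBackoff retries fn with exponential backoff plus jitter until the
// deadline passes. Initial delay and growth factor are assumptions chosen to
// resemble the intervals printed by retry.go above.
func retryWithBackoff(fn func() error, deadline time.Duration) error {
	var err error
	delay := 100 * time.Microsecond
	start := time.Now()
	for time.Since(start) < deadline {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) + 1))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return err
}

func main() {
	if err := retryWithBackoff(configureAuth, 2*time.Second); err != nil {
		fmt.Println("giving up:", err)
	}
}
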
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
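
The GUEST_START failure above traces back to the inspect template {{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}: it prints "<ipv4>,<ipv6>" only when the container is attached to a network named exactly ha-828033-m02. If that key is absent, the with block emits nothing, and splitting the empty output on "," yields a single empty field, which matches "should have 2 values, got 1 values: []". A minimal reconstruction of that check under this reading (inferred from the error message, not minikube's actual source):

package main

import (
	"fmt"
	"strings"
)

// parseContainerIPs mimics the check implied by the repeated error in the
// log: the inspect template is expected to print "<ipv4>,<ipv6>", i.e.
// exactly two comma-separated fields. Reconstruction only.
func parseContainerIPs(inspectOutput string) (ipv4, ipv6 string, err error) {
	addrs := strings.Split(strings.TrimSpace(inspectOutput), ",")
	if len(addrs) != 2 {
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
	}
	return addrs[0], addrs[1], nil
}

func main() {
	// Attached to the expected network: the template prints something like
	// "192.168.49.3," (an empty IPv6 still gives two fields).
	fmt.Println(parseContainerIPs("192.168.49.3,"))

	// Network key missing: {{with ...}} emits nothing, the split yields one
	// empty field, and we get the exact error seen on every retry above.
	fmt.Println(parseContainerIPs(""))
}
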
	
	
	==> Docker <==
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:26 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:26Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:27 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:27 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:27Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 17:53:30 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:30Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240513-cd2ac642: Status: Downloaded newer image for kindest/kindnetd:v20240513-cd2ac642"
	May 22 17:53:32 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:32Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.399087699Z" level=info msg="ignoring event" container=dd5bd702646a46de165f70b974819728d0d1e4dcd480a756580f462132b4e49b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.477631846Z" level=info msg="ignoring event" container=8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.558216062Z" level=info msg="ignoring event" container=63f49aaadee913b978ed9eff66b35c52ee24c7ed5fb7c74f4c3fc76578c0f4a1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 dockerd[1209]: time="2024-05-22T17:53:39.607461254Z" level=info msg="ignoring event" container=91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 17:53:39 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:53:39 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:56:28 ha-828033 dockerd[1209]: 2024/05/22 17:56:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:30 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:56:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc32c92f2fa0451f2154953804d41863edba21af2f870a0567808c1f52d63863/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 17:56:32 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:56:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
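
Note the timeline in this section: the "Device \"eth0\" does not exist" status-hook failures at 17:53:26–27 occur before the kindnet image finishes pulling at 17:53:30 and before the runtime receives PodCidr 10.244.0.0/24 at 17:53:32 — the CoreDNS sandboxes were queried before the CNI had wired up their interfaces. A hedged sketch of the kind of interface lookup that yields such an error (illustrative only; not cri-dockerd's actual code path):

package main

import (
	"fmt"
	"net"
)

// podIPFromInterface looks up eth0 the way the status hook's error suggests:
// if the interface does not exist yet (CNI not finished), there is no pod IP.
func podIPFromInterface(name string) (string, error) {
	iface, err := net.InterfaceByName(name)
	if err != nil {
		return "", fmt.Errorf("networkPlugin cni failed on the status hook: %w", err)
	}
	addrs, err := iface.Addrs()
	if err != nil || len(addrs) == 0 {
		return "", fmt.Errorf("no addresses on %s", name)
	}
	return addrs[0].String(), nil
}

func main() {
	ip, err := podIPFromInterface("eth0")
	fmt.Println(ip, err)
}
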
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              14 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
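
The two Exited CoreDNS containers (63f49aaadee91, dd5bd702646a4) are attempt 0 of the same pods whose attempt-1 replacements (3d03dbb9a9ab6, f7fd69b1c56b6) have been Running since the pod network came up; logs for all four appear below. One way to list both attempts side by side, assuming the standard k8s_<container>_<pod>_... naming that cri-dockerd applies to managed containers:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// A name filter on "k8s_coredns" catches Running and Exited instances
	// alike, matching the four CoreDNS entries in the table above.
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_coredns",
		"--format", "{{.ID}}\t{{.Status}}\t{{.Names}}").CombinedOutput()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
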
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
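
Both exited CoreDNS instances fail identically: list/watch calls to the service VIP 10.96.0.1:443 and upstream probes to 192.168.49.1:53 return "network is unreachable", consistent with the pod network not yet being routable at startup; the replacement containers resolve normally. A minimal probe mirroring those two dials (addresses taken from the logs; TCP is used here for a clean success/failure signal, although the DNS probe in the log was UDP):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The two destinations CoreDNS could not reach in the logs above.
	targets := []string{"10.96.0.1:443", "192.168.49.1:53"}
	for _, t := range targets {
		conn, err := net.DialTimeout("tcp", t, 3*time.Second)
		if err != nil {
			fmt.Printf("%s: %v\n", t, err) // e.g. "connect: network is unreachable"
			continue
		}
		fmt.Printf("%s: reachable\n", t)
		conn.Close()
	}
}
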
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:08:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
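
The Allocated resources block above is simply the column sums of the pod table: CPU requests are 2×100m (CoreDNS) + 100m (etcd) + 100m (kindnet) + 250m (apiserver) + 200m (controller-manager) + 100m (scheduler) = 950m, and kubectl truncates 950m/8000m to 11%. A quick check using the figures from the table:

package main

import "fmt"

func main() {
	// CPU requests in millicores, copied from the pod table above.
	requests := map[string]int{
		"coredns-7db6d8ff4d-dxfhb":          100,
		"coredns-7db6d8ff4d-gznzs":          100,
		"etcd-ha-828033":                    100,
		"kindnet-swzdx":                     100,
		"kube-apiserver-ha-828033":          250,
		"kube-controller-manager-ha-828033": 200,
		"kube-scheduler-ha-828033":          100,
	}
	total := 0
	for _, m := range requests {
		total += m
	}
	// 8 allocatable cores = 8000m; integer division truncates like kubectl.
	fmt.Printf("total: %dm (%d%% of 8000m)\n", total, total*100/8000)
	// Prints: total: 950m (11% of 8000m)
}
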
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
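
The raft lines show a one-member cluster: aec36adc501070cc grants itself the pre-vote and the vote and becomes leader at term 2, and no peer ever appears — consistent with ha-828033-m02 never finishing provisioning. The later entries are routine five-minute auto-compactions (revision 969 at 18:03, 1510 at 18:08). A hedged sketch of confirming the member count with etcd's official Go client (assumes go.etcd.io/etcd/client/v3 is available; a real minikube etcd also requires client TLS certificates, omitted here, so this only illustrates the call shape):

package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoint from the log ("serving client traffic ... 192.168.49.2:2379").
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"192.168.49.2:2379"},
		DialTimeout: 3 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()
	resp, err := cli.MemberList(ctx)
	if err != nil {
		panic(err)
	}
	// With ha-828033-m02 never provisioned, this should report 1 member.
	fmt.Println("members:", len(resp.Members))
}
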
	
	
	==> kernel <==
	 18:08:29 up 50 min,  0 users,  load average: 1.05, 0.66, 0.54
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:06:22.119488       1 main.go:227] handling current node
	I0522 18:06:32.123105       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:32.123127       1 main.go:227] handling current node
	I0522 18:06:42.126205       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:42.126228       1 main.go:227] handling current node
	I0522 18:06:52.131861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:52.131883       1 main.go:227] handling current node
	I0522 18:07:02.142017       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:02.142042       1 main.go:227] handling current node
	I0522 18:07:12.145450       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:12.145473       1 main.go:227] handling current node
	I0522 18:07:22.154163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:22.154189       1 main.go:227] handling current node
	I0522 18:07:32.157408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:32.157432       1 main.go:227] handling current node
	I0522 18:07:42.162820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:42.162841       1 main.go:227] handling current node
	I0522 18:07:52.166119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:52.166142       1 main.go:227] handling current node
	I0522 18:08:02.169586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:02.169608       1 main.go:227] handling current node
	I0522 18:08:12.174539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:12.174566       1 main.go:227] handling current node
	I0522 18:08:22.182207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:22.182228       1 main.go:227] handling current node
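
kindnet logs the same single node (192.168.49.2) roughly every ten seconds — the shape of a periodic reconcile over the node list, which here only ever contains the current node because the second node never joined. A generic sketch of that polling pattern (interval and types are assumptions, not kindnet's code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Node IPs as kindnet prints them; only the primary node appears in the
	// log above because the second node was never provisioned.
	nodes := map[string]struct{}{"192.168.49.2": {}}

	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for i := 0; i < 3; i++ { // bounded here so the sketch terminates
		<-ticker.C
		for ip := range nodes {
			fmt.Printf("Handling node with IPs: map[%s:{}]\n", ip)
			fmt.Println("handling current node")
		}
	}
}
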
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143872       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0522 17:53:09.143956       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  107s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  107s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (720.72s)
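The describe output above explains the 720s DeployApp timeout: the busybox ReplicaSet places its replicas with pod anti-affinity, and with only a single node available (StartCluster exited 80 before the other control planes joined) the scheduler reports "0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules" for the two remaining replicas. A minimal way to confirm this from the same context, assuming the Deployment is named busybox (inferred from the ReplicaSet name busybox-fc5497c4f):

	kubectl --context ha-828033 get nodes
	kubectl --context ha-828033 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'

The first command should list exactly one Ready node; the second prints the anti-affinity term that keeps busybox-fc5497c4f-cw6wc and busybox-fc5497c4f-x4bg9 Pending.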

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (2.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-cw6wc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (110.220596ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cw6wc does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-cw6wc could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-nhhq2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-nhhq2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p ha-828033 -- exec busybox-fc5497c4f-x4bg9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (103.434825ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-x4bg9 does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-x4bg9 could not resolve 'host.minikube.internal': exit status 1
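Both nslookup failures are downstream of the scheduling failure above rather than a DNS problem: kubectl exec returns BadRequest ("does not have a host assigned") for any pod that is still Pending. A sketch for filtering to replicas that actually got scheduled, using the app=busybox label from the pod spec shown earlier:

	kubectl --context ha-828033 get pods -l app=busybox --field-selector=status.phase=Running -o name

Only busybox-fc5497c4f-nhhq2 matches here, and it is also the only replica whose nslookup and ping commands succeeded.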
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
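The inspect output is mainly useful for the published ports: each container port (22, 2376, 5000, 8443, 32443) is bound to 127.0.0.1 on an ephemeral host port, so the apiserver at container port 8443 is reachable from the host at 127.0.0.1:32784, while the node's cluster-facing address is 192.168.49.2 on the ha-828033 Docker network. The same mapping can be read back without parsing the JSON:

	docker port ha-828033 8443
	# expected output, per NetworkSettings.Ports above: 127.0.0.1:32784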
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1            |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
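
	The two steps above first probe for a free private /24 and then create a labeled bridge network for the cluster. A minimal check of the result, assuming the docker CLI on the host and the ha-828033 network from this run:

	  # print the subnet and gateway minikube selected
	  docker network inspect ha-828033 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	  # expected here: 192.168.49.0/24 192.168.49.1
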
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
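
	Key provisioning above generates an RSA keypair, copies the public half into the node, and chowns it to the docker user. A quick way to confirm the two sides match, using the paths from this run:

	  docker exec ha-828033 cat /home/docker/.ssh/authorized_keys
	  cat /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub
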
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
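
	The guarded script above only rewrites the 127.0.1.1 entry when no line for the hostname exists yet, so reruns are idempotent. A minimal post-condition check, assuming exec access to the node:

	  docker exec ha-828033 grep '^127.0.1.1' /etc/hosts
	  # expected: 127.0.1.1 ha-828033
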
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
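
	configureAuth above generated a server certificate carrying a SAN for every name the daemon may be reached by (the san=[...] list) and copied it into /etc/docker for dockerd's --tlsverify mode. A sketch for inspecting those SANs locally, assuming openssl and the server.pem path from this run:

	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
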
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
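
	The diff in the output above is informational; the operative part is the diff-then-mv idiom from the command, which installs the staged unit and reloads/restarts docker only when the content actually changed. The same idempotent pattern for any unit, with staged.service as a stand-in name:

	  sudo diff -u /lib/systemd/system/docker.service staged.service || {
	    sudo mv staged.service /lib/systemd/system/docker.service
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	  }
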
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
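
	The sed edits above align containerd with the cgroupfs driver detected on the host; mismatched cgroup drivers between runtime and kubelet are a common failure mode. A minimal post-edit check, assuming exec access to the node:

	  docker exec ha-828033 grep -n 'SystemdCgroup' /etc/containerd/config.toml
	  # expected: SystemdCgroup = false
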
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
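
	The unmask/enable/restart sequence above must leave the cri-dockerd socket active before kubeadm can reach the runtime, which is what the 60s wait just below verifies. The equivalent manual check:

	  docker exec ha-828033 sudo systemctl is-active cri-docker.socket
	  docker exec ha-828033 stat /var/run/cri-dockerd.sock
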
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
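
	crictl resolves its endpoint from the /etc/crictl.yaml written earlier, so the version call above is an end-to-end probe of the CRI socket. Reproducing it by hand, assuming exec access to the node:

	  docker exec ha-828033 sudo crictl version
	  # for this run: RuntimeName docker, RuntimeVersion 26.1.2, RuntimeApiVersion v1
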
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
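
	Preload detection is just a comparison of docker images inside the node against the image list expected for v1.30.1. A minimal sketch of the same check, assuming the nested docker daemon from this run:

	  docker exec ha-828033 docker images --format '{{.Repository}}:{{.Tag}}' \
	    | grep 'registry.k8s.io/kube-apiserver:v1.30.1'
	  # any output means the control-plane images were already preloaded
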
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
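
	As with docker.service earlier, the kubelet drop-in above clears the inherited ExecStart with an empty assignment before setting its own flags; without the reset, systemd would reject a second ExecStart for a non-oneshot unit. The merged result can be inspected with:

	  docker exec ha-828033 sudo systemctl cat kubelet
	  # the final ExecStart= line is the one that takes effect
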
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
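(The kubeadm config above is rendered from the options struct logged at 17:53:01.113121. A toy sketch of that templating idea, using only the Go standard library; the struct and template here are illustrative, not minikube's actual template:)

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative subset of the values fed into the kubeadm config template.
    type kubeadmValues struct {
    	AdvertiseAddress string
    	BindPort         int
    	PodSubnet        string
    	ServiceSubnet    string
    	ClusterName      string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    clusterName: {{.ClusterName}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	v := kubeadmValues{"192.168.49.2", 8443, "10.244.0.0/16", "10.96.0.0/12", "mk"}
    	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, v)
    }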
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
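(kube-vip's control-plane load balancing needs the ipvs kernel modules, and the probe above is literally lsmod | grep ip_vs; on this kernel it fails, so only the VIP is kept. An equivalent check in Go, a sketch that reads /proc/modules, which is what lsmod itself consults:)

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	// /proc/modules lists one loaded module per line, name first.
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	found := false
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		if strings.HasPrefix(s.Text(), "ip_vs") {
    			found = true
    			break
    		}
    	}
    	fmt.Println("ip_vs loaded:", found) // false here, so control-plane LB is skipped
    }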
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
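(Once the kube-vip static pod above wins the plndr-cp-lock lease, the virtual IP 192.168.49.254 should answer on port 8443. A quick reachability probe, a sketch that is not part of minikube:)

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the HA virtual IP that kube-vip advertises via ARP (address/port from the manifest above).
    	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable yet:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("VIP is answering")
    }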
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
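(The bash one-liner above makes the /etc/hosts entry idempotent: strip any previous control-plane.minikube.internal mapping, then append the current VIP. The same logic in Go, a sketch that writes a copy rather than the real /etc/hosts:)

    package main

    import (
    	"os"
    	"strings"
    )

    func main() {
    	const host = "control-plane.minikube.internal"
    	const entry = "192.168.49.254\t" + host
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		panic(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		// Drop any stale mapping for the control-plane name.
    		if strings.HasSuffix(line, "\t"+host) {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
    	// Write a copy instead of clobbering the real file in this sketch.
    	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0644); err != nil {
    		panic(err)
    	}
    }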
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
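(The profile certs generated above are ordinary x509 client certificates signed by the shared minikube CA. A compressed sketch of the pattern with crypto/x509; the subject names mirror the log, the rest is illustrative and error handling is elided:)

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"os"
    	"time"
    )

    func main() {
    	// Self-signed CA, standing in for minikubeCA.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Client cert signed by the CA, like the "minikube-user" profile cert.
    	cliKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	cliTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
    	}
    	cliDER, _ := x509.CreateCertificate(rand.Reader, cliTmpl, caCert, &cliKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: cliDER})
    }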
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, implausibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
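(The three ln -fs commands above implement OpenSSL's hashed-directory convention: verification looks up a CA as /etc/ssl/certs/<subject-hash>.0, so each PEM needs a symlink named after its subject hash. A sketch that derives the same link name by shelling out to openssl, as the log does:)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
    	// openssl x509 -hash -noout prints the subject hash, e.g. b5213941.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	// The symlink target name OpenSSL expects, matching the ln -fs above.
    	fmt.Println("link name:", "/etc/ssl/certs/"+strings.TrimSpace(string(out))+".0")
    }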
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
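(The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded public key, i.e. its SubjectPublicKeyInfo. It can be recomputed from ca.crt to verify a join command; a sketch:)

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }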
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
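(The burst of kubectl get sa default calls above is a fixed-interval wait for the default service account to appear, which only happens once the controller manager's token controller is running; here it took about 12s. The same wait, sketched with os/exec and the paths from the log:)

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	kubectl := "/var/lib/minikube/binaries/v1.30.1/kubectl"
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command(kubectl, "get", "sa", "default",
    			"--kubeconfig=/var/lib/minikube/kubeconfig")
    		if cmd.Run() == nil {
    			fmt.Println("default service account exists")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
    	}
    	fmt.Println("timed out waiting for default service account")
    }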
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
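(Node IPs in the cluster network are assigned sequentially from the subnet base: the gateway takes .1, the primary node .2, m02 .3, with the VIP pinned at .254. A sketch of that arithmetic:)

    package main

    import (
    	"fmt"
    	"net"
    )

    func main() {
    	_, subnet, _ := net.ParseCIDR("192.168.49.0/24")
    	// Node n gets base+1+n: the gateway is .1, the first node .2, m02 .3, ...
    	for n := 1; n <= 2; n++ {
    		ip := make(net.IP, len(subnet.IP.To4()))
    		copy(ip, subnet.IP.To4())
    		ip[3] += byte(1 + n)
    		fmt.Printf("node %d -> %s\n", n, ip)
    	}
    }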
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
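The two docker runs above implement the preload trick: the images tarball is bind-mounted read-only into a throwaway container that untars it into the node's named volume, so the node container later starts with /var (and with it the container runtime's image store) already populated. A hedged sketch of the same sequence via os/exec, with the volume and image names taken from the log and the helper itself being illustrative:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // preloadVolume creates the node volume, then extracts the preloaded
    // images tarball into it using a short-lived container.
    func preloadVolume(volume, tarball, baseImage string) error {
        steps := [][]string{
            {"docker", "volume", "create", volume},
            {"docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
                "-v", tarball + ":/preloaded.tar:ro",
                "-v", volume + ":/extractDir",
                baseImage, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir"},
        }
        for _, s := range steps {
            if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
                return fmt.Errorf("%v failed: %v\n%s", s, err, out)
            }
        }
        return nil
    }

    func main() {
        fmt.Println(preloadVolume("ha-828033-m02",
            "/path/to/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"))
    }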
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
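Because SSH is not reachable yet at this point, the freshly generated public key is pushed into the container with docker exec rather than over SSH, then chowned to the docker user, as the two kic_runner lines above show. A sketch of that placement (the helper name and shell details are illustrative, not minikube's source):

    package main

    import (
        "os"
        "os/exec"
    )

    // installAuthorizedKey streams the public key into the container over
    // stdin and fixes its ownership, mirroring the kic_runner steps above.
    func installAuthorizedKey(container, pubKeyPath string) error {
        f, err := os.Open(pubKeyPath)
        if err != nil {
            return err
        }
        defer f.Close()
        cmd := exec.Command("docker", "exec", "--privileged", "-i", container,
            "/bin/sh", "-c",
            "cat > /home/docker/.ssh/authorized_keys && "+
                "chown docker:docker /home/docker/.ssh/authorized_keys")
        cmd.Stdin = f
        return cmd.Run()
    }

    func main() {
        _ = installAuthorizedKey("ha-828033-m02",
            "/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub")
    }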
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
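This first configureAuth failure is the one the whole run hinges on. The inspect template indexes .NetworkSettings.Networks by the container name "ha-828033-m02", but the container was attached with --network ha-828033, so the map key is the cluster network name. In Go templates, indexing a map of pointers with a missing key yields nil, the {{with}} body is skipped, the command prints an empty string, and splitting that on "," produces one value instead of the expected IPv4,IPv6 pair: exactly the "should have 2 values, got 1 values: []" error that repeats below. A minimal reproduction (types simplified; Docker's real map is map[string]*network.EndpointSettings):

    package main

    import (
        "fmt"
        "strings"
        "text/template"
    )

    type endpoint struct{ IPAddress, GlobalIPv6Address string }

    func main() {
        // The container is attached to "ha-828033", but the template
        // indexes the map with the container name "ha-828033-m02".
        networks := map[string]*endpoint{
            "ha-828033": {IPAddress: "192.168.49.3"},
        }
        tmpl := template.Must(template.New("ips").Parse(
            `{{with (index . "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))
        var out strings.Builder
        _ = tmpl.Execute(&out, networks) // missing key -> nil -> {{with}} emits nothing
        parts := strings.Split(out.String(), ",")
        fmt.Printf("got %d values: %v\n", len(parts), parts) // got 1 values: []
    }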
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
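The retry cadence above (99µs, 208µs, ... 11s, 19s over roughly 70 seconds) is exponential backoff with jitter wrapped around a deterministic inspect call, so it can never succeed here; it only burns the provisioning window until the 1m11s give-up. A rough sketch of that loop shape (the interval handling is an assumption, not minikube's retry package):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries fn with jittered, doubling sleeps until it succeeds
    // or the total budget is spent, then returns the last error.
    func retryExpo(fn func() error, initial, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        interval := initial
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return err
            }
            time.Sleep(interval/2 + time.Duration(rand.Int63n(int64(interval))))
            interval *= 2
        }
    }

    func main() {
        err := retryExpo(func() error {
            return errors.New("container addresses should have 2 values, got 1 values: []")
        }, 100*time.Microsecond, 500*time.Millisecond)
        fmt.Println(err)
    }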
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
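After the provisioning failure the driver performs a single cleanup-and-retry pass, as the preceding lines show: power the half-built container off via SSH/init, delete it with docker rm -f -v, attempt to remove the cluster network (which fails harmlessly here because the running primary node is still attached to it), wait five seconds, and run createHost again from scratch. The shape of that wrapper as a hedged sketch (function names are illustrative):

    package main

    import (
        "fmt"
        "time"
    )

    // startHostOnce attempts creation and, on failure, tears the node down
    // and retries exactly once after a fixed delay.
    func startHostOnce(create func() error, teardown func()) error {
        if err := create(); err != nil {
            fmt.Printf("! StartHost failed, but will try again: %v\n", err)
            teardown() // stop container, docker rm -f -v, best-effort network rm
            time.Sleep(5 * time.Second)
            return create()
        }
        return nil
    }

    func main() {
        attempts := 0
        _ = startHostOnce(
            func() error { attempts++; return nil },
            func() {},
        )
        fmt.Println("attempts:", attempts)
    }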
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
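The connection reset above is benign: sshd inside the just-started container is not accepting connections yet, and the SSH client simply retries the dial until the handshake succeeds about three seconds later. A sketch of that readiness wait at the TCP level (the poll interval and timeout are assumptions; the port is the forwarded one from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls the forwarded SSH port until a TCP connection can be
    // established or the timeout elapses.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        fmt.Println(waitForSSH("127.0.0.1:32797", 30*time.Second))
    }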
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
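The wall of retries above is a textbook exponential-backoff-with-jitter loop: the wait grows from roughly 400µs to about 32s before provisioning gives up after ~84 seconds. Below is a minimal stdlib Go sketch of that pattern; retryExpo and its parameters are illustrative, not minikube's actual retry helper.

```go
// A minimal sketch of the exponential-backoff-with-jitter pattern in the
// retry lines above (~400µs doubling up to ~32s). retryExpo and its
// parameters are illustrative; this is not minikube's actual retry helper.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with a roughly doubling, jittered delay until fn
// succeeds or maxElapsed has passed.
func retryExpo(fn func() error, initial, maxElapsed time.Duration) error {
	delay := initial
	deadline := time.Now().Add(maxElapsed)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %s: %w", maxElapsed, err)
		}
		// Jitter spreads the waits out so concurrent retries don't synchronize.
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
}

func main() {
	err := retryExpo(func() error {
		return errors.New("Temporary Error: error getting ip during provisioning")
	}, 400*time.Microsecond, 2*time.Second)
	fmt.Println(err)
}
```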
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
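The root failure in the run above traces back to the docker inspect template, which joins IPAddress and GlobalIPv6Address with a comma. When the container has no entry for the "ha-828033-m02" network, the {{with}} block emits nothing, and splitting the empty output yields a single empty field, which Go's %v prints as []. A hypothetical sketch of that parsing step, assuming a comma split; parseContainerIPs is illustrative, not minikube's exact code:

```go
// A hypothetical reconstruction of the check behind the repeated
// "container addresses should have 2 values" error. The inspect template
// joins IPAddress and GlobalIPv6Address with a comma, so when the container
// has no entry for the "ha-828033-m02" network the {{with}} block emits
// nothing and the split yields one empty field (which %v prints as []).
// parseContainerIPs is illustrative, not minikube's exact code.
package main

import (
	"fmt"
	"strings"
)

func parseContainerIPs(templateOutput string) (ipv4, ipv6 string, err error) {
	addrs := strings.Split(strings.TrimSpace(templateOutput), ",")
	if len(addrs) != 2 {
		return "", "", fmt.Errorf(
			"container addresses should have 2 values, got %d values: %v",
			len(addrs), addrs)
	}
	return addrs[0], addrs[1], nil
}

func main() {
	// An empty template result reproduces the error seen in the log.
	if _, _, err := parseContainerIPs(""); err != nil {
		fmt.Println(err) // ... got 1 values: []
	}
}
```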
	
	
	==> Docker <==
	May 22 17:53:39 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:53:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 17:56:28 ha-828033 dockerd[1209]: 2024/05/22 17:56:28 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:30 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:56:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc32c92f2fa0451f2154953804d41863edba21af2f870a0567808c1f52d63863/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 17:56:32 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:56:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:30 ha-828033 dockerd[1209]: 2024/05/22 18:08:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
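The repeated dockerd warnings above come from Go's net/http server, which logs "superfluous response.WriteHeader call" whenever a handler sets the status code more than once. A self-contained reproduction of the warning, unrelated to dockerd's real handlers:

```go
// A self-contained reproduction of Go's "http: superfluous
// response.WriteHeader call" warning: net/http logs it whenever a handler
// sets the status code more than once.
package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		// The second call is ignored and makes the server log the warning.
		w.WriteHeader(http.StatusInternalServerError)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}
```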
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              15 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:08:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	
	
	==> kernel <==
	 18:08:32 up 50 min,  0 users,  load average: 1.05, 0.66, 0.54
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:06:22.119488       1 main.go:227] handling current node
	I0522 18:06:32.123105       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:32.123127       1 main.go:227] handling current node
	I0522 18:06:42.126205       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:42.126228       1 main.go:227] handling current node
	I0522 18:06:52.131861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:52.131883       1 main.go:227] handling current node
	I0522 18:07:02.142017       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:02.142042       1 main.go:227] handling current node
	I0522 18:07:12.145450       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:12.145473       1 main.go:227] handling current node
	I0522 18:07:22.154163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:22.154189       1 main.go:227] handling current node
	I0522 18:07:32.157408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:32.157432       1 main.go:227] handling current node
	I0522 18:07:42.162820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:42.162841       1 main.go:227] handling current node
	I0522 18:07:52.166119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:52.166142       1 main.go:227] handling current node
	I0522 18:08:02.169586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:02.169608       1 main.go:227] handling current node
	I0522 18:08:12.174539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:12.174566       1 main.go:227] handling current node
	I0522 18:08:22.182207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:22.182228       1 main.go:227] handling current node
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	E0522 18:08:30.845810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38884: use of closed network connection
	E0522 18:08:30.987779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38902: use of closed network connection
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  109s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  109s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (2.16s)
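The post-mortem above already contains the root cause for the two non-running pods: busybox-fc5497c4f-cw6wc and busybox-fc5497c4f-x4bg9 stay Pending because the busybox ReplicaSet schedules its replicas with pod anti-affinity ("1 node(s) didn't match pod anti-affinity rules"), and at this point the cluster has only the single node ha-828033, so only one replica can land. A minimal way to confirm this from the same context — the commands mirror the harness invocations above, and the -l app=busybox selector is taken from the Labels shown in the describe output:

	kubectl --context ha-828033 get pods -l app=busybox -o wide   # Pending pods show Node <none>
	kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc   # Events repeat the FailedScheduling reason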

TestMultiControlPlane/serial/AddWorkerNode (1.67s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-828033 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p ha-828033 -v=7 --alsologtostderr: exit status 50 (121.391229ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I0522 18:08:32.703954   83789 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:08:32.704089   83789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:32.704098   83789 out.go:304] Setting ErrFile to fd 2...
	I0522 18:08:32.704103   83789 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:32.704265   83789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:08:32.704519   83789 mustload.go:65] Loading cluster: ha-828033
	I0522 18:08:32.704914   83789 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:08:32.705319   83789 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:08:32.722379   83789 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:08:32.722625   83789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:08:32.765437   83789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:08:32.756679362 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:08:32.765783   83789 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:08:32.780473   83789 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:08:32.783073   83789 out.go:177] 
	W0522 18:08:32.784412   83789 out.go:239] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-828033-m02 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-828033-m02 endpoint: failed to lookup ip for ""
	W0522 18:08:32.784450   83789 out.go:239] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I0522 18:08:32.785677   83789 out.go:177] 

** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 node add -p ha-828033 -v=7 --alsologtostderr" : exit status 50
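Exit status 50 with DRV_CP_ENDPOINT means minikube could not resolve an IP for the secondary control-plane container ha-828033-m02 ("failed to lookup ip for \"\""). The docker inspect that follows shows the primary container is healthy; running the same kind of checks the harness itself uses against the m02 container would show whether it exists and holds an address on the ha-828033 network. A sketch — the first command is copied from the cli_runner invocations in the stderr log, the second --format template is an illustrative Go-template query, not taken from the log:

	docker container inspect ha-828033-m02 --format={{.State.Status}}
	docker container inspect ha-828033-m02 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'   # empty output would match the lookup failure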
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1            |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
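The status struct above is the gate for driver selection. A minimal Go sketch of that kind of probe, assuming only that the docker CLI is on PATH and that `docker version` exits non-zero when the daemon is unreachable (checkDockerHealthy is a hypothetical name, not minikube's API):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // checkDockerHealthy is a hypothetical stand-in for the driver status
    // check: "Installed" if the client binary exists, "Healthy" if the
    // daemon answers a version query.
    func checkDockerHealthy() (installed, healthy bool) {
    	if _, err := exec.LookPath("docker"); err != nil {
    		return false, false
    	}
    	// "docker version" fails when the daemon is down or unreachable.
    	if err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Run(); err != nil {
    		return true, false
    	}
    	return true, true
    }

    func main() {
    	installed, healthy := checkDockerHealthy()
    	fmt.Printf("status for docker: {Installed:%v Healthy:%v}\n", installed, healthy)
    }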
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
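The acquireMachinesLock entries above show a retrying lock (Delay:500ms, Timeout:10m0s) guarding machine creation. A self-contained sketch of that pattern using an exclusive lock file; the real implementation lives in minikube's lock package, and this is only an illustration:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // acquireLock retries an O_EXCL file creation every `delay` until
    // `timeout` elapses, mirroring the Delay/Timeout pair in the log.
    func acquireLock(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquireLock("/tmp/minikube-ha-828033.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		panic(err)
    	}
    	defer release()
    	fmt.Println("lock held; provisioning would run here")
    }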
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
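The free-subnet probe and the `docker network create` call above are plain docker CLI invocations. Wrapped in Go they look roughly like this (flags copied from the logged command; error handling simplified):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the "docker network create" invocation logged above; the
    	// profile name is taken from this run and is otherwise arbitrary.
    	name, subnet, gateway := "ha-828033", "192.168.49.0/24", "192.168.49.1"
    	out, err := exec.Command("docker", "network", "create",
    		"--driver=bridge",
    		"--subnet="+subnet,
    		"--gateway="+gateway,
    		"-o", "--ip-masq", "-o", "--icc",
    		"-o", "com.docker.network.driver.mtu=1500",
    		"--label=created_by.minikube.sigs.k8s.io=true",
    		"--label=name.minikube.sigs.k8s.io="+name,
    		name).CombinedOutput()
    	if err != nil {
    		fmt.Printf("network create failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("docker network %s %s created\n", name, subnet)
    }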
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
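The ~4s extraction step runs tar inside a throwaway container so the lz4-compressed preload lands directly in the `ha-828033` volume. A sketch of the same invocation from Go, using the paths from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Equivalent of the extraction step above: mount the preload tarball
    	// read-only, mount the machine volume at /extractDir, and untar with
    	// lz4 decompression.
    	tarball := "/home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4"
    	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"
    	start := time.Now()
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", "ha-828033:/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    	fmt.Printf("extracted preloaded images in %s\n", time.Since(start))
    }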
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
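configureAuth issues a server certificate whose SANs cover the loopback address, the container IP, and the hostnames listed above. A compact sketch of issuing such a certificate with Go's crypto/x509, self-signed here for brevity, whereas the real flow signs with the minikube CA in certs/ca.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-828033"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN list logged above.
    		DNSNames:    []string{"ha-828033", "localhost", "minikube"},
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    	}
    	// Self-signed (template == parent) only to keep the sketch short.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }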
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
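The overlay detection is a one-liner over SSH; run locally, the same probe looks like this (GNU coreutils df assumed):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Same probe as the SSH command above: ask df for the filesystem
    	// type of / and keep only the data row.
    	out, err := exec.Command("df", "--output=fstype", "/").Output()
    	if err != nil {
    		panic(err)
    	}
    	lines := strings.Split(strings.TrimSpace(string(out)), "\n")
    	fmt.Println("root file system type:", lines[len(lines)-1])
    }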
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
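Note how the docker.service update is kept idempotent: the new unit is rendered to docker.service.new, and only when `diff` reports a difference (exit status 1) is the file swapped in and the daemon reloaded and restarted, which is why the full diff appears in the log. A sketch of that guard as a single shell step driven from Go:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Only swap in the new unit and restart docker when the rendered
    	// file actually differs; identical files make diff exit 0 and the
    	// restart branch is skipped.
    	script := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    }`
    	out, err := exec.Command("/bin/sh", "-c", script).CombinedOutput()
    	if err != nil {
    		fmt.Printf("unit update failed: %v\n", err)
    	}
    	fmt.Printf("%s", out)
    }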
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
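detect.go reported the "cgroupfs" driver for this Ubuntu 20.04 host. A common heuristic for that decision, shown below, is to test for a unified cgroup v2 hierarchy; this is an illustrative check, not necessarily minikube's exact logic:

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// A unified cgroup v2 hierarchy exposes cgroup.controllers at the
    	// mount root; hosts still on cgroup v1, like this agent, typically
    	// pair with the "cgroupfs" driver.
    	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
    		fmt.Println("cgroup v2 (systemd driver is the usual choice)")
    	} else {
    		fmt.Println(`detected "cgroupfs" cgroup driver on host os`)
    	}
    }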
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
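Both "Will wait 60s" steps above reduce to polling until a file or socket appears. A sketch of such a wait loop (waitForSocket is a hypothetical helper name, not minikube's):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a path until it exists or the deadline
    // passes, mirroring the "Will wait 60s for socket path" step above.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		panic(err)
    	}
    	fmt.Println("cri-dockerd socket is ready")
    }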
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
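The "Images are preloaded" decision compares the runtime's image list against the set required for v1.30.1. A sketch of that comparison (the required list is abbreviated here):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// List the tags known to the runtime, then confirm every required
    	// image is present before deciding to skip loading from cache.
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.30.1",
    		"registry.k8s.io/etcd:3.5.12-0",
    		"registry.k8s.io/coredns/coredns:v1.11.1",
    		"registry.k8s.io/pause:3.9",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing, would load from cache:", img)
    			return
    		}
    	}
    	fmt.Println("Images are preloaded, skipping loading")
    }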
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
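The `"0%!"(MISSING)` values in the kubelet `evictionHard` block above are a logging artifact, not part of the generated config: the rendered YAML contains the literal thresholds `"0%"`, and when that text is later passed through a printf-style logger, the `%` followed by `"` is parsed as an unknown format verb with no operand. A minimal Go demonstration of the mangling (illustrative only, not minikube source):

    package main

    import "fmt"

    func main() {
        // The config template renders eviction thresholds as "0%".
        // Passing that text as a *format* string makes fmt read `%"`
        // as a bad verb with no matching operand (go vet would flag this):
        fmt.Printf("nodefs.available: \"0%\"\n")
        // Prints: nodefs.available: "0%!"(MISSING)
    }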
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
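At 17:53:01.123923 above, kube-vip's IPVS-based control-plane load balancing is skipped because `lsmod | grep ip_vs` found no loaded `ip_vs` module, so the generated manifest falls back to a plain ARP-advertised VIP (`vip_arp: "true"`, `address: 192.168.49.254`, no load-balancer settings). `lsmod` is a thin formatter over `/proc/modules`, so the probe reduces to a scan of that file; a rough Go sketch of the same check (assumed helper, not minikube's code):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    // moduleLoaded reports whether any loaded kernel module name contains
    // the given substring, matching the log's `lsmod | grep ip_vs` semantics.
    func moduleLoaded(substr string) (bool, error) {
        f, err := os.Open("/proc/modules")
        if err != nil {
            return false, err
        }
        defer f.Close()
        s := bufio.NewScanner(f)
        for s.Scan() {
            // The first whitespace-separated field of each line is the module name.
            fields := strings.Fields(s.Text())
            if len(fields) > 0 && strings.Contains(fields[0], substr) {
                return true, nil
            }
        }
        return false, s.Err()
    }

    func main() {
        ok, err := moduleLoaded("ip_vs")
        fmt.Println("ip_vs loaded:", ok, err)
    }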
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
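The bash one-liner above is a filter-then-append rewrite of `/etc/hosts`: any stale line for `control-plane.minikube.internal` is dropped, the VIP mapping `192.168.49.254` is appended, and the result is copied back over the original with sudo. The same logic in a stdlib-Go sketch (hypothetical helper; the real command goes through a temp file and `sudo cp`):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry replaces any existing "<ip>\t<host>" line in path
    // with a fresh mapping, leaving all other lines untouched.
    func setHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // drop the stale mapping, like the grep -v
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        err := setHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }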
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
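Each "generating signed profile cert" step above is minikube signing a leaf certificate with the shared `minikubeCA`. Note the SAN list on the apiserver cert: besides the node IP 192.168.49.2 it covers the in-cluster Service IP 10.96.0.1, localhost, and the HA VIP 192.168.49.254, so clients can reach the API server through any of those addresses. A self-contained stdlib sketch of the same shape (illustrative; minikube loads the CA from disk rather than generating it):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Stand-in CA; the real one lives at .minikube/ca.{crt,key}.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(3, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf certificate signed for the IP SANs seen in the log.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
                net.ParseIP("192.168.49.254"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
    }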
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
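The `openssl x509 -hash -noout` / `ln -fs ... /etc/ssl/certs/<hash>.0` pairs above build OpenSSL's hashed-directory lookup inside the node: a verifier locates a CA by the hash of its subject name, so `b5213941.0` must point at `minikubeCA.pem` for in-node TLS clients to trust minikube-issued certs. The two steps, sketched from Go by shelling out to openssl exactly as the log does (assumed helper):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkBySubjectHash symlinks certPath into dir as "<subject-hash>.0",
    // mirroring the `openssl x509 -hash` + `ln -fs` pair in the log.
    func linkBySubjectHash(certPath, dir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join(dir, hash+".0")
        _ = os.Remove(link) // emulate ln -f
        return os.Symlink(certPath, link)
    }

    func main() {
        err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }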
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
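The two kubeadm join commands above pin the cluster CA with `--discovery-token-ca-cert-hash sha256:570edb...`: the value is the SHA-256 digest of the CA certificate's DER-encoded Subject Public Key Info, so a joining node can authenticate the discovery data before trusting it. A stdlib sketch that recomputes the hash from the CA file on the node (path taken from the log; illustrative):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Load the cluster CA certificate used by kubeadm.
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo, as kubeadm does.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("sha256:%x\n", sha256.Sum256(spki))
    }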
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
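The burst of identical `kubectl get sa default` runs above is a fixed-interval wait: judging by the timestamps, minikube probes every 500ms until the `default` ServiceAccount exists (i.e. until the controller manager has populated kube-system), then reports the whole wait as the 12.157s `elevateKubeSystemPrivileges` metric. The pattern, reduced to a stdlib sketch (illustrative, not minikube's implementation):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // pollUntil runs check every interval until it succeeds or timeout elapses.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
        deadline := time.Now().Add(timeout)
        for {
            if err := check(); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
            // Same probe as the log: does the "default" ServiceAccount exist yet?
            return exec.Command("kubectl", "get", "sa", "default").Run()
        })
        fmt.Println(time.Since(start), err)
    }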
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
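The sed pipeline at 17:53:24.914209/17:53:24.987665 patches the CoreDNS ConfigMap in place rather than replacing it: a `hosts` block resolving `host.minikube.internal` to the network gateway 192.168.49.1 (with `fallthrough` for all other names) is inserted ahead of the existing `forward . /etc/resolv.conf` directive, and a `log` directive is added before `errors`. Assuming an otherwise stock Corefile (the `...` stands for the untouched plugins), the patched region reads:

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf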
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
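
The short-lived "-preload-sidecar" container above is a volume probe: it mounts the freshly created ha-828033-m02 volume at /var and runs `test -d /var/lib` as its entrypoint, so a zero exit confirms the volume mounts and gets seeded from the image. A hedged Go equivalent (helper name invented; image tag taken from the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // volumeUsable re-creates the sidecar check: run `test -d /var/lib` in a
    // throwaway container with the volume mounted at /var; exit 0 means the
    // directory exists, i.e. the volume mounted and was populated.
    func volumeUsable(volume, image string) bool {
        return exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", volume+":/var", image, "-d", "/var/lib").Run() == nil
    }

    func main() {
        fmt.Println(volumeUsable("ha-828033-m02",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"))
    }
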
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
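
The repeated `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls above are how libmachine learns where to SSH: each node publishes 22/tcp to an ephemeral port on 127.0.0.1 (32792 for this container), and the template extracts that HostPort. A standalone version of the lookup, with an invented helper name:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort returns the host port Docker mapped to the container's
    // sshd (22/tcp), mirroring the inspect template used in the log.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("ha-828033-m02")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh -p", port, "docker@127.0.0.1")
    }
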
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
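
This first failure is the crux of the whole test: the inspect template indexes .NetworkSettings.Networks by the node name "ha-828033-m02", but the docker run above attached the container to the network named "ha-828033", so the map lookup misses and {{with ...}} emits nothing. The caller evidently splits the template output on "," expecting an "ip,ip6" pair, and Go's split semantics explain the otherwise odd "got 1 values: []" wording:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Splitting an empty template result still yields one (empty)
        // element, never zero -- hence "got 1 values", printed as "[]".
        vals := strings.Split("", ",")
        fmt.Println(len(vals), vals) // prints: 1 []
    }
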
	[31 further configureAuth attempts, 17:53:32 through 17:54:23, condensed: each re-ran the same docker container inspect, completed in 14-23ms, and failed with the identical error "container addresses should have 2 values, got 1 values: []"; the retry backoff grew from 208.046µs to 19.415128992s]
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
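
The retry cadence above, roughly doubling from ~100µs toward ~20s with uneven intervals until the ~70s provisioning budget was exhausted, is exponential backoff with jitter. A hedged sketch of that shape; the constants are illustrative, not minikube's actual retry.go parameters:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func main() {
        delay := 100 * time.Microsecond
        budget := 70 * time.Second // roughly what the log above spent
        var spent time.Duration
        for attempt := 1; spent < budget && delay < 20*time.Second; attempt++ {
            // Jitter: randomize around the current delay, as the uneven
            // logged intervals (99µs, 208µs, 199µs, ...) suggest.
            wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("attempt %d: will retry after %v\n", attempt, wait)
            spent += wait
            delay *= 2
        }
    }
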
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
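
The two df probes above sample the node's /var filesystem over SSH, with awk picking one column from df's second line (NR==2): $5 is the Use% of `df -h`, $4 the free gigabytes of `df -BG`. Run locally here for illustration (minikube runs it on the node):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // varUsage mirrors the log's disk check: second line of `df -h /var`,
    // fifth column = Use%.
    func varUsage() (string, error) {
        out, err := exec.Command("sh", "-c",
            `df -h /var | awk 'NR==2{print $5}'`).Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        use, err := varUsage()
        if err != nil {
            panic(err)
        }
        fmt.Println("/var usage:", use)
    }
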
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
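
The stop path above is: run `sudo init 0` inside the container for a graceful shutdown, then poll `docker container inspect --format={{.State.Status}}` until Docker reports the container exited (which minikube logs as "Stopped"). A minimal polling loop in the same spirit; the timeout is invented:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            name, "--format={{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        deadline := time.Now().Add(30 * time.Second)
        for time.Now().Before(deadline) {
            // Docker reports "exited" once init 0 completes; minikube
            // logs that state as "Stopped".
            if s, err := containerStatus("ha-828033-m02"); err == nil && s == "exited" {
                fmt.Println("container stopped")
                return
            }
            time.Sleep(time.Second)
        }
        fmt.Println("timed out waiting for stop")
    }
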
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
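
The earlier "connection reset by peer" was benign: the container had just started and sshd was not yet accepting connections, so libmachine retried the dial and, as the line above shows, the hostname command succeeded about three seconds later. A standalone wait loop in the same spirit (function name and timings invented):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the mapped SSH port until something accepts the
    // TCP connection or the timeout expires.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if conn, err := net.DialTimeout("tcp", addr, 2*time.Second); err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh on %s not reachable within %v", addr, timeout)
    }

    func main() {
        fmt.Println(waitForSSH("127.0.0.1:32797", 30*time.Second))
    }
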
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	[12 further configureAuth attempts within 17:55:02, condensed: same inspect command, same failure ("container addresses should have 2 values, got 1 values: []"), backoff growing from 121.831µs to 10.684147ms]
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
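	
	The retry intervals in the attempts above grow from a few hundred microseconds to tens of seconds, consistent with a capped exponential backoff with jitter. A minimal Go sketch of that pattern follows; retryWithBackoff and its initial delay are assumptions for illustration, not minikube's actual retry.go helper:
	
	    package main
	
	    import (
	        "errors"
	        "fmt"
	        "math/rand"
	        "time"
	    )
	
	    // retryWithBackoff retries op with a randomized, roughly doubling
	    // delay, capped at 30s, until maxElapsed has passed. Hypothetical
	    // helper, for illustration only.
	    func retryWithBackoff(op func() error, maxElapsed time.Duration) error {
	        start := time.Now()
	        delay := 500 * time.Microsecond // assumed initial delay
	        var err error
	        for time.Since(start) < maxElapsed {
	            if err = op(); err == nil {
	                return nil
	            }
	            // Jitter: sleep between 0.5x and 1.5x the current delay.
	            time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
	            if delay *= 2; delay > 30*time.Second {
	                delay = 30 * time.Second
	            }
	        }
	        return fmt.Errorf("timed out after %v: %w", maxElapsed, err)
	    }
	
	    func main() {
	        err := retryWithBackoff(func() error {
	            return errors.New("error getting ip during provisioning")
	        }, 3*time.Second)
	        fmt.Println(err)
	    }
	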
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
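	
	The recurring provisioning failure comes from splitting the docker container inspect template output ("<IPv4>,<IPv6>") on a comma and expecting two fields: when the container has no address on the "ha-828033-m02" network, the {{with ...}} block renders nothing, and splitting an empty string yields a single empty field, hence "should have 2 values, got 1 values: []". A minimal Go sketch of that failure mode (parseAddresses is a hypothetical name, not minikube's code):
	
	    package main
	
	    import (
	        "fmt"
	        "strings"
	    )
	
	    // parseAddresses splits the "<IPv4>,<IPv6>" string produced by the
	    // inspect template. An empty input splits into one empty field,
	    // which %v prints as "[]", matching the log line above.
	    func parseAddresses(inspectOutput string) (ipv4, ipv6 string, err error) {
	        fields := strings.Split(strings.TrimSpace(inspectOutput), ",")
	        if len(fields) != 2 {
	            return "", "", fmt.Errorf(
	                "container addresses should have 2 values, got %d values: %v",
	                len(fields), fields)
	        }
	        return fields[0], fields[1], nil
	    }
	
	    func main() {
	        // Container absent from the network: template prints "".
	        if _, _, err := parseAddresses(""); err != nil {
	            fmt.Println(err) // ...got 1 values: []
	        }
	        // IPv4-only container: template prints "192.168.49.3,".
	        ip4, ip6, _ := parseAddresses("192.168.49.3,")
	        fmt.Printf("IPv4=%q IPv6=%q\n", ip4, ip6)
	    }
	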
	
	
	==> Docker <==
	May 22 17:56:29 ha-828033 dockerd[1209]: 2024/05/22 17:56:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 17:56:30 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:56:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bc32c92f2fa0451f2154953804d41863edba21af2f870a0567808c1f52d63863/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 17:56:32 ha-828033 cri-dockerd[1428]: time="2024-05-22T17:56:32Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:30 ha-828033 dockerd[1209]: 2024/05/22 18:08:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
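	
	The repeated dockerd lines above are Go's standard net/http warning: it is logged whenever a handler calls WriteHeader (directly or via a first Write) more than once, and it names the outermost caller, here the otelhttp instrumentation wrapper. A minimal, self-contained reproduction with a generic handler (not Docker's actual code):
	
	    package main
	
	    import (
	        "log"
	        "net/http"
	    )
	
	    func handler(w http.ResponseWriter, r *http.Request) {
	        w.WriteHeader(http.StatusOK)
	        // The second call is ignored, and net/http logs:
	        // "http: superfluous response.WriteHeader call from ..."
	        w.WriteHeader(http.StatusInternalServerError)
	    }
	
	    func main() {
	        http.HandleFunc("/", handler)
	        // Request http://127.0.0.1:8080/ and watch stderr for the warning.
	        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
	    }
	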
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              15 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:08:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	
	
	==> kernel <==
	 18:08:33 up 50 min,  0 users,  load average: 1.05, 0.66, 0.54
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:06:32.123127       1 main.go:227] handling current node
	I0522 18:06:42.126205       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:42.126228       1 main.go:227] handling current node
	I0522 18:06:52.131861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:52.131883       1 main.go:227] handling current node
	I0522 18:07:02.142017       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:02.142042       1 main.go:227] handling current node
	I0522 18:07:12.145450       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:12.145473       1 main.go:227] handling current node
	I0522 18:07:22.154163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:22.154189       1 main.go:227] handling current node
	I0522 18:07:32.157408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:32.157432       1 main.go:227] handling current node
	I0522 18:07:42.162820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:42.162841       1 main.go:227] handling current node
	I0522 18:07:52.166119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:52.166142       1 main.go:227] handling current node
	I0522 18:08:02.169586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:02.169608       1 main.go:227] handling current node
	I0522 18:08:12.174539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:12.174566       1 main.go:227] handling current node
	I0522 18:08:22.182207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:22.182228       1 main.go:227] handling current node
	I0522 18:08:32.186386       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:32.186409       1 main.go:227] handling current node
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	E0522 18:08:30.845810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38884: use of closed network connection
	E0522 18:08:30.987779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38902: use of closed network connection
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  111s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  111s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (1.67s)
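Note on the failure above: the FailedScheduling events point at pod anti-affinity. The busybox Deployment spreads replicas across nodes, and with only one schedulable node the remaining replicas stay Pending. A quick check against the same context (a sketch; it assumes the Deployment is named "busybox", as the ReplicaSet name "busybox-fc5497c4f" suggests, and reuses the "app=busybox" label from the describe output):

	# Pending replicas show <none> in the NODE column.
	kubectl --context ha-828033 get pods -l app=busybox -o wide
	# Print the anti-affinity term that keeps two replicas off the same node.
	kubectl --context ha-828033 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'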

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:304: expected profile "ha-828033" in json of 'profile list' to include 4 nodes but have 2 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-828033" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
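The inspect output above shows the kic container publishing the apiserver port 8443/tcp on 127.0.0.1:32784. That mapping can be read directly without scanning the full JSON (a sketch using docker inspect's standard --format Go-template syntax):

	docker inspect ha-828033 --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'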
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterClusterStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:06 UTC | 22 May 24 18:06 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:07 UTC | 22 May 24 18:07 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1            |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
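	
	The subnet and gateway chosen above can be checked against the created network with a plain Docker query (illustrative only; not a command this run executes):
	
	  docker network inspect ha-828033 --format '{{(index .IPAM.Config 0).Subnet}} {{(index .IPAM.Config 0).Gateway}}'
	  # expected output: 192.168.49.0/24 192.168.49.1
	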
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
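	
	The provisioning commands that follow all run over SSH to the container's published port. An equivalent manual session, using the key path and port shown in the surrounding lines (illustrative only, not part of the run):
	
	  ssh -i /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa -p 32787 docker@127.0.0.1 hostname
	  # prints: ha-828033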
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
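	
	The diff shows the standard systemd override idiom that the generated unit's comments describe: an empty ExecStart= first clears the inherited command, and the following ExecStart= supplies the replacement. A minimal sketch of the same pattern as a drop-in (hypothetical path and daemon flags, not taken from this run):
	
	  # /etc/systemd/system/docker.service.d/override.conf  (hypothetical drop-in)
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/dockerd -H fd://
	
	  # then reload and restart to apply:
	  sudo systemctl daemon-reload && sudo systemctl restart docker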
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
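	
	The one-liner above rewrites /etc/hosts via a temp file: it filters out any existing host.minikube.internal entry, appends the gateway mapping, and copies the result back over /etc/hosts. Afterwards the file should contain a line like (reconstruction):
	
	  192.168.49.1	host.minikube.internal
	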
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
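
The kubelet invocation above is produced by substituting the node's name, IP, and Kubernetes version into a systemd unit template. A minimal sketch of that substitution in Go, assuming a hypothetical kubeletUnit template string (not minikube's actual template text):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical drop-in template approximating the unit shown above.
    const kubeletUnit = `[Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.Node}} --node-ip={{.IP}}
    `

    func main() {
    	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
    	// Values taken from the log above.
    	t.Execute(os.Stdout, map[string]string{
    		"Version": "v1.30.1", "Node": "ha-828033", "IP": "192.168.49.2",
    	})
    }
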
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
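
The 2151-byte file scp'd to /var/tmp/minikube/kubeadm.yaml.new below is this multi-document YAML: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration, and a KubeProxyConfiguration separated by ---. A quick way to sanity-check such a file is to decode each document in turn; a sketch in Go using gopkg.in/yaml.v3 (illustrative only, not part of minikube):

    package main

    import (
    	"fmt"
    	"io"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		// Each document declares apiVersion and kind, as above.
    		var doc struct {
    			APIVersion string `yaml:"apiVersion"`
    			Kind       string `yaml:"kind"`
    		}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			panic(err)
    		}
    		fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
    	}
    }
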
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
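
kube-vip.go:163 above falls back because `lsmod | grep ip_vs` exited non-zero, i.e. no ip_vs kernel module is loaded, so minikube keeps the ARP-based VIP announcement (vip_arp "true", address 192.168.49.254 in the manifest) without IPVS control-plane load-balancing. Since lsmod only formats /proc/modules, the same gate can be sketched directly in Go (an illustration, not minikube's code):

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // ipvsLoaded reports whether any ip_vs* module appears in
    // /proc/modules, which is exactly what `lsmod | grep ip_vs` tests.
    func ipvsLoaded() (bool, error) {
    	f, err := os.Open("/proc/modules")
    	if err != nil {
    		return false, err
    	}
    	defer f.Close()
    	s := bufio.NewScanner(f)
    	for s.Scan() {
    		if strings.HasPrefix(s.Text(), "ip_vs") {
    			return true, nil
    		}
    	}
    	return false, s.Err()
    }

    func main() {
    	ok, err := ipvsLoaded()
    	fmt.Println("ip_vs loaded:", ok, err)
    }
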
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
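
The bash one-liner above is an idempotent hosts-entry update: drop any line already mapping the name, append the fresh mapping, and copy the temp file over /etc/hosts. The same pattern as a Go sketch (a hypothetical helper, not minikube's implementation; writing /etc/hosts requires root):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // ensureHostsEntry mirrors the `{ grep -v ...; echo ...; } > tmp; cp`
    // pipeline: remove stale lines ending in "\t<name>", then append the
    // new "<ip>\t<name>" mapping and move the file into place.
    func ensureHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := path + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
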
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
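
crypto.go:68 above issues the apiserver serving certificate for the SAN list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254], i.e. the in-cluster service IP, loopback, the node IP, and the HA VIP, signed by the cached minikubeCA. A self-contained sketch of issuing such a cert with Go's crypto/x509 (illustrative only; error handling is elided and the CA is generated on the fly rather than loaded from .minikube/ca.{crt,key} as minikube does):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Stand-in CA key pair.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// Serving cert carrying the IP SANs from the log line above.
    	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	leafTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
    		},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
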
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
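
Each trusted CA is installed twice in the lines above: the PEM itself under /usr/share/ca-certificates, plus a <subject-hash>.0 symlink under /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem), which is the lookup scheme OpenSSL uses to find trust anchors. A sketch of those two steps from Go, shelling out to the same openssl invocation seen in the log (a hypothetical helper; needs root):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // linkBySubjectHash runs `openssl x509 -hash -noout -in <pem>` and
    // creates /etc/ssl/certs/<hash>.0 -> <pem>, like the `ln -fs` above.
    func linkBySubjectHash(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // -f semantics: replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
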
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
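
ops.go:34 reads back the API server's oom_adj to confirm the kubelet applied the expected -16, which shields the apiserver process from the kernel OOM killer. That check is just a /proc read; a minimal Go sketch (hypothetical, not minikube's helper):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Mirrors the `cat /proc/$(pgrep kube-apiserver)/oom_adj` above.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, "kube-apiserver not running:", err)
    		os.Exit(1)
    	}
    	pid := strings.Fields(string(out))[0]
    	val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Printf("apiserver oom_adj: %s", val)
    }
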
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
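
The burst of identical `kubectl get sa default` runs above is a fixed-interval poll: minikube retries every 500ms until the default service account appears, then records the 12.157s wait as the elevateKubeSystemPrivileges duration metric. The shape of that wait as a stdlib-only Go sketch (hypothetical pollUntil helper; minikube's actual loop lives in kubeadm.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // pollUntil runs check every interval until it succeeds or the
    // timeout elapses, like the repeated `kubectl get sa default` above.
    func pollUntil(interval, timeout time.Duration, check func() error) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if err := check(); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return errors.New("timed out waiting for condition")
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
    		return exec.Command("kubectl", "get", "sa", "default").Run()
    	})
    	fmt.Println(err)
    }
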
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
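Note: the "calculated static IP" above is deterministic: the ha-828033 network is 192.168.49.0/24 with the gateway on .1 and the primary control-plane container on .2, so the second node lands on .3. A minimal sketch of that arithmetic (illustrative only; the real allocator lives in minikube's network code):

    package main

    import (
        "fmt"
        "net"
    )

    // nthHost returns the host at the given offset inside a /24-style
    // subnet. Illustrative: offset 1 is the gateway, 2 the primary node,
    // 3 the m02 node seen in this log.
    func nthHost(cidr string, offset byte) (net.IP, error) {
        ip, _, err := net.ParseCIDR(cidr)
        if err != nil {
            return nil, err
        }
        ip = ip.To4()
        ip[3] += offset
        return ip, nil
    }

    func main() {
        ip, _ := nthHost("192.168.49.0/24", 3)
        fmt.Println(ip) // 192.168.49.3
    }
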
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
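Note: the two "docker run" invocations above are minikube's volume-seeding trick: a throwaway container mounts the node's /var volume, and tar unpacks the preloaded image tarball straight into it, so the node container starts with its images already in place. A hedged sketch of the same pattern (not minikube's actual code; paths are placeholders):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks a preload tarball into a named Docker volume
    // by running tar inside a disposable container, mirroring the
    // "--entrypoint /usr/bin/tar" invocation in the log.
    func extractPreload(volume, tarball, image string) error {
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro", // archive mounted read-only
            "-v", volume+":/extractDir",        // target volume mounted read-write
            image,
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
        ).CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract preload: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        // Hypothetical values standing in for the ones in the log.
        fmt.Println(extractPreload("ha-828033-m02",
            "/path/to/preloaded-images.tar.lz4",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"))
    }
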
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
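Note: the container was started with --publish=127.0.0.1::22 (and similar flags for 8443, 2376, 5000, 32443), so Docker binds each guest port to a random free host port on localhost. That is why the SSH client below dials 127.0.0.1:32792; the port is recovered with the same inspect template that appears later in this log. A sketch under those assumptions:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // sshHostPort asks Docker which ephemeral host port was bound to the
    // container's 22/tcp, using the template visible in the log.
    func sshHostPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := sshHostPort("ha-828033-m02")
        fmt.Println(port, err) // e.g. 32792 while the container is up
    }
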
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
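Note: every configureAuth attempt fails identically, and the inspect command above suggests why: the template indexes .NetworkSettings.Networks with the machine name "ha-828033-m02", while the container was attached to the network "ha-828033" (see the --network flag when it was created). A missing map key makes the {{with}} body emit nothing, and splitting the empty string on "," yields one value instead of the expected two. A minimal, self-contained reproduction of that template behavior (an illustration, not minikube's code):

    package main

    import (
        "fmt"
        "strings"
        "text/template"
    )

    type endpoint struct{ IPAddress, GlobalIPv6Address string }

    func main() {
        // The container's real attachment: network "ha-828033".
        networks := map[string]*endpoint{
            "ha-828033": {IPAddress: "192.168.49.3"},
        }
        // The key queried in the log is the machine name instead; a missing
        // key yields a nil *endpoint, so {{with}} skips its body entirely.
        tmpl := template.Must(template.New("ip").Parse(
            `{{with (index . "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))
        var b strings.Builder
        if err := tmpl.Execute(&b, networks); err != nil {
            panic(err)
        }
        parts := strings.Split(b.String(), ",")
        fmt.Printf("got %d values: %q\n", len(parts), parts) // got 1 values: [""]
    }
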
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
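Note: the retry delays above grow roughly geometrically with jitter, from about 100µs up to ~19s, until the overall time budget is exhausted and provisioning gives up after ~1m11s. The retry.go frames suggest a bounded exponential backoff along these lines (a sketch, not minikube's own retry helper):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo re-runs fn with exponentially growing, jittered sleeps
    // until it succeeds or the total time budget is spent.
    func retryExpo(fn func() error, initial, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        wait := initial
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("giving up: %w", err)
            }
            // Sleep between 0.5x and 1.5x of the current wait, then double it.
            time.Sleep(wait/2 + time.Duration(rand.Int63n(int64(wait))))
            wait *= 2
        }
    }

    func main() {
        err := retryExpo(func() error {
            return errors.New("error getting ip during provisioning")
        }, 100*time.Microsecond, 2*time.Second)
        fmt.Println(err)
    }
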
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
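Note: the cleanup between the two attempts follows a fixed sequence: power the node off from inside via "sudo init 0" (tolerating the error when it has already exited, as the stderr above shows), wait for Docker to report the container stopped, remove it with its volumes, and only then try to delete the network, which fails harmlessly here because the primary node ha-828033 is still attached and running. A sketch of that sequence (illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func docker(args ...string) (string, error) {
        out, err := exec.Command("docker", args...).CombinedOutput()
        return strings.TrimSpace(string(out)), err
    }

    // stopAndDelete mirrors the log: graceful poweroff, poll for the
    // stopped state, then force-remove the container and its volumes.
    func stopAndDelete(name string) error {
        // May fail if the container already exited; that race is tolerated.
        _, _ = docker("exec", "--privileged", "-t", name, "/bin/bash", "-c", "sudo init 0")
        for i := 0; i < 10; i++ {
            if status, _ := docker("container", "inspect", name,
                "--format", "{{.State.Status}}"); status == "exited" {
                break
            }
            time.Sleep(time.Second)
        }
        _, err := docker("rm", "-f", "-v", name)
        return err
    }

    func main() {
        fmt.Println(stopAndDelete("ha-828033-m02"))
    }
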
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
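Note: the handshake reset above is transient: sshd inside the freshly started container is not yet accepting connections when the first dial arrives, and the very next lines show the retried connection succeeding. Libmachine retries dials internally; a comparable sketch using golang.org/x/crypto/ssh (an assumption-laden illustration, not libmachine's code):

    package main

    import (
        "log"
        "time"

        "golang.org/x/crypto/ssh"
    )

    // dialWithRetry tolerates resets while sshd inside the container
    // finishes starting up.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, tries int) (*ssh.Client, error) {
        var err error
        for i := 0; i < tries; i++ {
            var c *ssh.Client
            if c, err = ssh.Dial("tcp", addr, cfg); err == nil {
                return c, nil
            }
            time.Sleep(2 * time.Second)
        }
        return nil, err
    }

    func main() {
        cfg := &ssh.ClientConfig{
            User: "docker",
            // Real code would add ssh.PublicKeys(...) built from the
            // id_rsa generated earlier in the log.
            Auth:            []ssh.AuthMethod{},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only
        }
        if _, err := dialWithRetry("127.0.0.1:32797", cfg, 5); err != nil {
            log.Fatal(err)
        }
    }
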
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
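
The retry loop above has a fixed shape: every configureAuth attempt re-runs the same "docker container inspect" with a Go template that indexes .NetworkSettings.Networks by the container name "ha-828033-m02", and every attempt returns in ~15ms with a single empty value, so retry.go backs off (from microseconds up to ~32s) until ubuntu.go:189 stops retrying. A plausible reading, sketched below in Go, is that the container is not attached to a network under that exact key, so the {{with ...}} block renders nothing and the comma split yields one empty string. The helper name inspectIPs is hypothetical; minikube's real code path behind provision.go/ubuntu.go may differ in detail.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // inspectIPs is a hypothetical stand-in for the lookup behind configureAuth:
    // it asks Docker for a container's "<IPv4>,<IPv6>" pair on a named network
    // and insists on exactly two comma-separated values.
    func inspectIPs(container, network string) (ipv4, ipv6 string, err error) {
        format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", "", err
        }
        // If the container is not attached under this key, {{with}} renders
        // nothing; strings.Split("") still returns one empty element, which
        // %v prints as [] -- exactly the "got 1 values: []" seen in the log.
        vals := strings.Split(strings.TrimSpace(string(out)), ",")
        if len(vals) != 2 {
            return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(vals), vals)
        }
        return vals[0], vals[1], nil
    }

    func main() {
        if _, _, err := inspectIPs("ha-828033-m02", "ha-828033-m02"); err != nil {
            fmt.Println("would retry:", err) // mirrors the retry.go backoff above
        }
    }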
	
	
	==> Docker <==
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:29 ha-828033 dockerd[1209]: 2024/05/22 18:08:29 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:30 ha-828033 dockerd[1209]: 2024/05/22 18:08:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:31 ha-828033 dockerd[1209]: 2024/05/22 18:08:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
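
The dockerd lines above are Go's net/http warning rather than a Docker error as such: net/http logs "superfluous response.WriteHeader call" whenever WriteHeader is invoked more than once on the same response, here through the otelhttp wrapper that instruments the Docker API. A minimal reproduction of the same warning (any handler that double-writes the header triggers the identical message):

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            // The second call is ignored, and net/http logs:
            // "http: superfluous response.WriteHeader call from ..."
            w.WriteHeader(http.StatusOK)
        })
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }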
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              15 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
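
Both exited coredns containers (63f49aaadee9 and dd5bd702646a) show the same startup-ordering pattern: the kubernetes plugin's client-go reflector tries to list Services, Namespaces, and EndpointSlices through the service VIP https://10.96.0.1:443 before pod networking is up, gets "network is unreachable", and the pods are later replaced by the Running attempt-1 containers. The failing requests are ordinary client-go list calls, roughly as below; this is a sketch using in-cluster config, not CoreDNS's actual code.

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        // In-cluster config points at the same https://10.96.0.1:443 VIP
        // named in the reflector errors above.
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Mirrors "Get https://10.96.0.1:443/api/v1/services?limit=500...".
        svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{Limit: 500})
        if err != nil {
            fmt.Println("list failed (a reflector would retry):", err)
            return
        }
        fmt.Println("services:", len(svcs.Items))
    }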
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
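
By contrast, the query log from this healthy replica shows the in-cluster resolution path working normally: short names like kubernetes.default are expanded through the pod's resolv.conf search domains (hence the NXDOMAIN for kubernetes.default.default.svc.cluster.local just before the NOERROR for kubernetes.default.svc.cluster.local), plus PTR lookups for the service and host IPs. The same traffic can be generated from a pod with Go's resolver; a sketch, assuming it runs in a pod with the default cluster DNS configuration:

    package main

    import (
        "context"
        "fmt"
        "net"
    )

    func main() {
        ctx := context.Background()
        // Walks the resolv.conf search domains, producing the
        // NXDOMAIN-then-NOERROR sequence seen in the CoreDNS log.
        addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default")
        fmt.Println(addrs, err)
        // Reverse lookup of the kube-dns service IP -> the PTR queries above.
        names, err := net.DefaultResolver.LookupAddr(ctx, "10.96.0.10")
        fmt.Println(names, err)
    }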
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:08:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
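
Two details worth noting in the node description: the percentages are requests/limits measured against the Allocatable block (950m of 8 CPUs is ~11.9%, truncated to 11%; 290Mi of 32859356Ki is ~0.9%, truncated to 0%, and the per-pod figures truncate the same way), and only ha-828033 itself appears. Consistent with the provisioning failure above, ha-828033-m02 never registered with the API server.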
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	
	
	==> kernel <==
	 18:08:35 up 50 min,  0 users,  load average: 1.05, 0.67, 0.54
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:06:32.123127       1 main.go:227] handling current node
	I0522 18:06:42.126205       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:42.126228       1 main.go:227] handling current node
	I0522 18:06:52.131861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:52.131883       1 main.go:227] handling current node
	I0522 18:07:02.142017       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:02.142042       1 main.go:227] handling current node
	I0522 18:07:12.145450       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:12.145473       1 main.go:227] handling current node
	I0522 18:07:22.154163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:22.154189       1 main.go:227] handling current node
	I0522 18:07:32.157408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:32.157432       1 main.go:227] handling current node
	I0522 18:07:42.162820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:42.162841       1 main.go:227] handling current node
	I0522 18:07:52.166119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:52.166142       1 main.go:227] handling current node
	I0522 18:08:02.169586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:02.169608       1 main.go:227] handling current node
	I0522 18:08:12.174539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:12.174566       1 main.go:227] handling current node
	I0522 18:08:22.182207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:22.182228       1 main.go:227] handling current node
	I0522 18:08:32.186386       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:32.186409       1 main.go:227] handling current node
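
The kindnet log makes the blast radius of the StartCluster failure visible from inside the cluster: every 10-second reconcile handles exactly one node (192.168.49.2). Because ha-828033-m02 never finished provisioning, no second node ever appeared for the CNI to route to.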
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	E0522 18:08:30.845810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38884: use of closed network connection
	E0522 18:08:30.987779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38902: use of closed network connection
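
	Note on the log above: the "use of closed network connection" errors at 18:08:28-30 are reads on connections from the host gateway (192.168.49.1) through the HA virtual IP 192.168.49.254:8443 that the client side dropped, most likely the post-mortem status and log collection running at that time, not an apiserver fault. The endpoint can be probed by hand, mirroring the healthz check the status command performs later in this log:

	# -k skips TLS verification against the minikube CA; a healthy apiserver answers "ok".
	curl -k https://192.168.49.254:8443/healthz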
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
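
	Note on the log above: kube-proxy came up cleanly in iptables mode, IPv4 only (no IPv6 cluster CIDR is configured, hence the no-op detect-local fallback). A quick way to confirm the active proxier on a live cluster, assuming the standard k8s-app=kube-proxy label:

	kubectl --context ha-828033 -n kube-system logs -l k8s-app=kube-proxy | grep Proxier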
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
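
	Note on the log above: the burst of "forbidden" list/watch warnings at 17:53:09-10 is ordinary startup ordering; the scheduler's informers begin before its RBAC bindings exist, and the final "Caches are synced" line shows it recovered. Had the warnings persisted, the grants could be probed directly, for example:

	# Prints "yes" once the scheduler's RBAC is in place.
	kubectl --context ha-828033 auth can-i list nodes --as=system:kube-scheduler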
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
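
	Note on the log above: the kubelet received the node's pod CIDR (10.244.0.0/24) at 17:53:32 and admitted the busybox pod at 17:56:29. The CIDR assignment can be confirmed on the node object itself:

	kubectl --context ha-828033 get node ha-828033 -o jsonpath='{.spec.podCIDR}'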
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
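
	Note on the log above: the provisioner records its leader lease on the kube-system/k8s.io-minikube-hostpath Endpoints object, as the LeaderElection event shows. The current holder can be inspected with:

	kubectl --context ha-828033 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml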
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterClusterStart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  113s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  113s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterClusterStart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.86s)
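
Both pending busybox pods fail scheduling with "didn't match pod anti-affinity rules": the test evidently deploys its three replicas with an anti-affinity rule that spreads them across nodes, and since only ha-828033 ever came up, two replicas can never be placed. A sketch for inspecting the rule in effect (the deployment name busybox is inferred from the ReplicaSet busybox-fc5497c4f above):

	# Dump the affinity stanza of the test deployment.
	kubectl --context ha-828033 get deployment busybox -o jsonpath='{.spec.template.spec.affinity}'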

TestMultiControlPlane/serial/StopSecondaryNode (3.02s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-828033 node stop m02 -v=7 --alsologtostderr: (1.157677234s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (283.805325ms)

-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0522 18:08:41.147906   86715 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:08:41.148178   86715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:41.148188   86715 out.go:304] Setting ErrFile to fd 2...
	I0522 18:08:41.148193   86715 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:41.148340   86715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:08:41.148505   86715 out.go:298] Setting JSON to false
	I0522 18:08:41.148531   86715 mustload.go:65] Loading cluster: ha-828033
	I0522 18:08:41.148583   86715 notify.go:220] Checking for updates...
	I0522 18:08:41.149012   86715 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:08:41.149032   86715 status.go:255] checking status of ha-828033 ...
	I0522 18:08:41.149474   86715 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:08:41.169096   86715 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:08:41.169118   86715 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:08:41.169345   86715 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:08:41.185324   86715 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:08:41.185639   86715 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:08:41.185694   86715 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:08:41.201443   86715 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:08:41.284063   86715 ssh_runner.go:195] Run: systemctl --version
	I0522 18:08:41.287736   86715 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:08:41.297533   86715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:08:41.345026   86715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:34 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-22 18:08:41.335641705 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:08:41.345682   86715 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:08:41.345712   86715 api_server.go:166] Checking apiserver status ...
	I0522 18:08:41.345748   86715 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:08:41.356116   86715 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:08:41.364294   86715 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:08:41.364345   86715 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:08:41.371852   86715 api_server.go:204] freezer state: "THAWED"
	I0522 18:08:41.371878   86715 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:08:41.376213   86715 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:08:41.376233   86715 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:08:41.376243   86715 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:08:41.376263   86715 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:08:41.376474   86715 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:08:41.393065   86715 status.go:330] ha-828033-m02 host status = "Stopped" (err=<nil>)
	I0522 18:08:41.393085   86715 status.go:343] host is not running, skipping remaining checks
	I0522 18:08:41.393104   86715 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
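
The trace above shows how the status command verifies an apiserver: find the kube-apiserver process in the node container, read its freezer cgroup to confirm it is THAWED (not frozen), then hit /healthz through the VIP. The same sequence can be replayed by hand; the cgroup path below is a placeholder for the full path printed in the log:

	# Locate the apiserver PID inside the node container.
	docker exec ha-828033 sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	# "THAWED" means the cgroup is runnable.
	docker exec ha-828033 sudo cat /sys/fs/cgroup/freezer/docker/<container-id>/kubepods/burstable/<pod-id>/<container>/freezer.state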
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr": ha-828033
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-828033-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:378: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr": ha-828033
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-828033-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:381: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr": ha-828033
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-828033-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha_test.go:384: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr": ha-828033
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

ha-828033-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
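
The inspect output confirms the primary node container is still running, pinned to 192.168.49.2 on the ha-828033 network, with SSH published on 127.0.0.1:32787 (matching the sshutil line in the status trace above). Individual port mappings can be read without parsing the JSON:

	# Host port bound to the node's SSH port.
	docker port ha-828033 22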
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --                                                       |           |         |         |                     |                     |
	|         | nslookup kubernetes.default                                                      |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o                                                      | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh                                                    |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1                                                        |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
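	(The prefix described above is the standard klog/glog header, so each entry below can be decoded field by field. As a reading aid only, derived from the format string above and using the first entry of this run as the example:
	
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	# I               -> severity: I=Info, W=Warning, E=Error, F=Fatal
	# 0522            -> date mmdd (May 22)
	# 17:52:51.616388 -> wall-clock time hh:mm:ss.uuuuuu
	# 67740           -> thread/process id of this minikube invocation
	# out.go:291      -> source file:line that emitted the message)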
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
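	(The subnet chosen above can be cross-checked with the plain docker CLI once the network exists; a minimal sketch, using the profile name and addresses recorded in this run:
	
	docker network inspect ha-828033 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.49.0/24 via 192.168.49.1)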
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
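	(The remote script above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one, so the node's own hostname always resolves locally. The expected end state, not echoed in this log and so an assumption, is:
	
	grep '^127.0.1.1' /etc/hosts
	# 127.0.1.1 ha-828033)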
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
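	(The server certificate generated above carries the SAN list [127.0.0.1 192.168.49.2 ha-828033 localhost minikube]. One way to confirm that on the written file is stock openssl; a sketch, with the output ordering illustrative only:
	
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'
	# DNS:ha-828033, DNS:localhost, DNS:minikube, IP Address:127.0.0.1, IP Address:192.168.49.2)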
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
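	(The cert copies below ride over this forwarded SSH port; the same connection can be reproduced by hand with the port and key path recorded on the line above. A sketch, not part of the test itself:
	
	ssh -o StrictHostKeyChecking=no -p 32787 \
	  -i /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa \
	  docker@127.0.0.1 hostname
	# expected output: ha-828033)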
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
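	(After the diff-or-replace one-liner above swaps in docker.service.new, the effective unit can be confirmed with systemctl itself. A sketch; the doubled ExecStart is the clear-then-set pattern explained in the unit's own comments:
	
	systemctl cat docker.service | grep '^ExecStart'
	# ExecStart=
	# ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock ...)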
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
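	(Renaming the competing bridge/podman configs to *.mk_disabled leaves only the CNI config minikube manages with a parseable extension, which is what the runtime will load. A quick way to see the result on the node; a sketch, and the exact listing may vary:
	
	ls /etc/cni/net.d/
	# 100-crio-bridge.conf.mk_disabled
	# 87-podman-bridge.conflist.mk_disabled)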
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
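Taken together, the sed edits above leave /etc/containerd/config.toml using the cgroupfs driver (SystemdCgroup = false), the pause:3.9 sandbox image, the v2 runc shim, and /etc/cni/net.d as the CNI conf dir. A quick way to confirm the rewritten keys after the restart (a sketch; exact whitespace in the file may differ):

    grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|runc\.v2' \
      /etc/containerd/config.toml
    systemctl is-active containerd   # should print: active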
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
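This generated config is what later lands in /var/tmp/minikube/kubeadm.yaml. Recent kubeadm releases can lint such a file offline; minikube does not run this here, so treat it as an optional cross-check (a sketch, assuming it is run on the node with the pinned binary):

    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml
    # compare against upstream defaults if a field looks surprising
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config print init-defaults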
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
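Because the ip_vs modules were missing, this manifest runs kube-vip in ARP mode only (vip_arp=true) with Lease-based leader election. Once the cluster is up, both effects are observable; a sketch, assuming a working kubeconfig:

    # leader-election lease named by vip_leasename in the env above
    kubectl -n kube-system get lease plndr-cp-lock -o yaml
    # on the current leader node, the VIP is bound to eth0 (vip_interface)
    ip addr show dev eth0 | grep 192.168.49.254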
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
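Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same idempotent idiom: filter out any stale record for the name, append the fresh one, and copy the result back under sudo. As a standalone sketch with placeholder NAME/IP (values taken from this run):

    NAME=control-plane.minikube.internal; IP=192.168.49.254
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts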
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
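The ls/openssl/ln sequence repeated three times above follows OpenSSL's hash-directory convention: each CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0 for openssl to find it at verification time. One mapping from this run, re-derived by hand (hash value taken from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                            # symlink -> minikubeCA.pem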
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
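The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's public key, which lets a joining node pin the control plane it talks to. It can be recomputed from the CA generated earlier (the standard kubeadm recipe, with the cert path taken from certificatesDir in the config above):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # expected: 570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e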
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
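The burst of identical `kubectl get sa default` calls above is a readiness poll: the minikube-rbac clusterrolebinding cannot be applied until the token controller has created the default service account, so minikube retries roughly every 500ms until the lookup succeeds (about 12s in this run). The equivalent wait as a standalone sketch (the timeout bound is an assumption):

    # poll up to ~60s for the default service account to exist
    for _ in $(seq 1 120); do
      sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1 && break
      sleep 0.5
    done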
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
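The two sed expressions in the ssh_runner command above splice a hosts block and a log directive into the coredns ConfigMap before replacing it. Reconstructed from those expressions, the affected part of the Corefile afterwards reads (unrelated plugins elided):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

The hosts plugin answers host.minikube.internal with the docker network's gateway IP and falls through to the remaining plugins for everything else, which is what the "host record injected" line confirms.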
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
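The static IP logged here comes from the cluster network's base address plus the node's ordinal: the primary got 192.168.49.2, so this second control plane gets .3. A hypothetical sketch of that arithmetic (function name and signature are illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"net"
    )

    // nodeIP derives a node address from the network gateway (x.y.z.1) by
    // adding the 1-based node index: node 1 -> .2, node 2 (m02) -> .3.
    func nodeIP(gateway net.IP, index int) net.IP {
    	ip := make(net.IP, 4)
    	copy(ip, gateway.To4())
    	ip[3] += byte(index)
    	return ip
    }

    func main() {
    	gw := net.ParseIP("192.168.49.1")
    	fmt.Println(nodeIP(gw, 2)) // 192.168.49.3, matching the log
    }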
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
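What follows is a long retry loop: the waits grow from microseconds (99µs, 208µs, ...) to tens of seconds, roughly doubling with jitter, until provisioning's overall budget is spent. A minimal sketch of that pattern (hypothetical helper, not the verbatim retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retryWithBackoff re-runs op with roughly doubling waits until it
    // succeeds or the deadline passes, mirroring the "will retry after ..."
    // lines below.
    func retryWithBackoff(op func() error, initial, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for delay := initial; ; delay *= 2 {
    		err := op()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("giving up: %w", err)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	err := retryWithBackoff(func() error {
    		return errors.New("error getting ip during provisioning")
    	}, 100*time.Microsecond, 2*time.Second)
    	fmt.Println(err)
    }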
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
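Every attempt in the loop above fails identically: the inspect template {{.IPAddress}},{{.GlobalIPv6Address}} is expected to print two comma-separated addresses, but it indexes .NetworkSettings.Networks by the machine name "ha-828033-m02" while the container was started with --network ha-828033 (see the docker run above), so the lookup appears to come up empty and splitting the empty output yields a single field. An illustrative reconstruction of that check (not the verbatim minikube source):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // containerIPs mirrors the failing check: the inspect template should
    // print "<IPv4>,<IPv6>", but an empty result splits into one field.
    func containerIPs(inspectOutput string) (string, string, error) {
    	addrs := strings.Split(strings.TrimSpace(inspectOutput), ",")
    	if len(addrs) != 2 {
    		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
    	}
    	return addrs[0], addrs[1], nil
    }

    func main() {
    	// The template printed nothing, reproducing the logged error text.
    	_, _, err := containerIPs("")
    	fmt.Println(err) // container addresses should have 2 values, got 1 values: []
    }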
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
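Note: the first SSH dial is reset because sshd inside the freshly started container is still coming up; libmachine retries and succeeds about three seconds later. The manual equivalent, with the port and key path taken from this log, would be roughly:

	  ssh -i /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa \
	      -p 32797 docker@127.0.0.1 hostname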
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
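Note: the failure is deterministic, which is why every retry below returns the identical error. The inspect template indexes the Networks map with the machine name "ha-828033-m02", while the container was attached with --network ha-828033 (see the docker run above); that mismatch would explain why the with block renders nothing and no IP is parsed. Listing the actual network keys, then querying by the network name, illustrates it (illustrative commands, not from the test run):

	  docker container inspect -f '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' ha-828033-m02
	  docker container inspect -f '{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' ha-828033-m02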
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
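Note: the retry.go lines above trace a roughly exponential backoff with jitter, from ~111µs up to ~32s, for about 84 seconds in total before provisioning gives up. A minimal shell sketch of that pattern (try_configure_auth is a hypothetical stand-in for the failing probe, not a real command):

	  delay=0.0001
	  until try_configure_auth; do            # hypothetical stand-in probe
	    sleep "$delay"
	    delay=$(awk -v d="$delay" 'BEGIN { d *= 2; print (d < 32) ? d : 32 }')
	  done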
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
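Note: everything from "==> Docker <==" onward is the post-mortem diagnostics dump gathered from the primary node, the same output minikube logs produces. To regenerate it against this profile one would run something like:

	  minikube logs -p ha-828033 --file=logs.txt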
	
	
	==> Docker <==
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:32 ha-828033 dockerd[1209]: 2024/05/22 18:08:32 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              15 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
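Note: this table lists the CRI containers on the primary node only (m02 was created but never provisioned). On a live node the same view can be reproduced with crictl, which ships in the minikube node image (illustrative):

	  minikube ssh -p ha-828033 -- sudo crictl ps -a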
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
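Note: taken together, the four coredns logs show the restarted instances (3d03..., f7fd...) resolving normally, while the original pair (63f4..., dd5b...) died with "network is unreachable" before the node network was ready. A lookup like the ones logged above can be reproduced from the busybox pod in the container list (illustrative):

	  kubectl exec busybox-fc5497c4f-nhhq2 -- nslookup kubernetes.default.svc.cluster.local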
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:08:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
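Note: only the primary node appears in this section; ha-828033-m02 never joined the cluster, consistent with the provisioning failure above. The same view can be regenerated against the profile's kubeconfig with (illustrative):

	  kubectl describe node ha-828033
	  kubectl get nodes -o wide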
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	
	
	==> kernel <==
	 18:08:42 up 51 min,  0 users,  load average: 0.97, 0.66, 0.54
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:06:42.126228       1 main.go:227] handling current node
	I0522 18:06:52.131861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:52.131883       1 main.go:227] handling current node
	I0522 18:07:02.142017       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:02.142042       1 main.go:227] handling current node
	I0522 18:07:12.145450       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:12.145473       1 main.go:227] handling current node
	I0522 18:07:22.154163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:22.154189       1 main.go:227] handling current node
	I0522 18:07:32.157408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:32.157432       1 main.go:227] handling current node
	I0522 18:07:42.162820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:42.162841       1 main.go:227] handling current node
	I0522 18:07:52.166119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:52.166142       1 main.go:227] handling current node
	I0522 18:08:02.169586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:02.169608       1 main.go:227] handling current node
	I0522 18:08:12.174539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:12.174566       1 main.go:227] handling current node
	I0522 18:08:22.182207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:22.182228       1 main.go:227] handling current node
	I0522 18:08:32.186386       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:32.186409       1 main.go:227] handling current node
	I0522 18:08:42.198337       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:42.198364       1 main.go:227] handling current node
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	E0522 18:08:30.845810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38884: use of closed network connection
	E0522 18:08:30.987779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38902: use of closed network connection
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  119s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  119s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (3.02s)
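Both pending busybox replicas above report the same root cause: with the secondary node stopped, the scheduler sees a single Ready node ("0/1 nodes are available"), and that node already runs busybox-fc5497c4f-nhhq2, so a required pod anti-affinity rule blocks the remaining replicas. The Deployment manifest is not shown in this log, so the selector and topology key below are assumptions; this is a minimal Go sketch of the kind of anti-affinity term that yields "didn't match pod anti-affinity rules":

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	// Required (hard) anti-affinity: no two pods labeled app=busybox may
    	// share a hostname. With only one Ready node, only one replica fits;
    	// the rest stay Pending with FailedScheduling, as in the events above.
    	affinity := &corev1.Affinity{
    		PodAntiAffinity: &corev1.PodAntiAffinity{
    			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
    				LabelSelector: &metav1.LabelSelector{
    					MatchLabels: map[string]string{"app": "busybox"}, // assumed selector
    				},
    				TopologyKey: "kubernetes.io/hostname", // assumed topology key
    			}},
    		},
    	}
    	fmt.Printf("%+v\n", affinity)
    }

A preferred (soft) anti-affinity would instead let the spare replicas co-schedule on the surviving node, which is why the hard variant is the plausible culprit here.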

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.85s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-828033" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
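The inspect output above shows the node container still running, with the API server port 8443/tcp published on 127.0.0.1:32784 and the container attached to the ha-828033 network at 192.168.49.2. When reproducing these post-mortem checks by hand, one such mapping can be pulled out with a Go template passed to the Docker CLI; a small sketch (profile name and port taken from this log, the template format being a common `docker inspect -f` pattern rather than anything this test uses):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Render only the published host port for 8443/tcp from docker inspect.
    	format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
    	out, err := exec.Command("docker", "inspect", "-f", format, "ha-828033").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Expected output for this run: 32784 (see the Ports block above).
    	fmt.Println(strings.TrimSpace(string(out)))
    }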
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 --                                                       |           |         |         |                     |                     |
	|         | nslookup kubernetes.default                                                      |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o                                                      | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh                                                    |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1                                                        |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
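Each entry below follows that klog-style header. As a reading aid, the first entry of this log decomposes as follows (annotation added for this report, not part of the captured output):

	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	# I               = severity (I=Info, W=Warning, E=Error, F=Fatal)
	# 0522            = mmdd (May 22)
	# 17:52:51.616388 = hh:mm:ss.uuuuuu
	# 67740           = threadid of the emitting process
	# out.go:291      = source file:line
	# everything after "]" is the message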
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
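When reproducing this run by hand, the network minikube created here can be inspected with Docker's template output (a verification sketch using the profile name from this log; it is not part of the test run):

	# confirm the subnet and gateway of the cluster network
	docker network inspect ha-828033 \
	  --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'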
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
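The volume-plus-sidecar sequence above is a standard Docker pattern: create a named volume, then populate it with a short-lived container whose entrypoint is tar. A minimal generic sketch of the same idea (volume, tarball and image names are placeholders; minikube's variant adds -I lz4 because its preload tarball is lz4-compressed):

	docker volume create mydata
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preload.tar:/preload.tar:ro" \
	  -v mydata:/extractDir \
	  ubuntu:22.04 -xf /preload.tar -C /extractDir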
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
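Note the --publish=127.0.0.1::8443 style flags on the docker run above: an empty host-port field asks Docker for a random free port bound to loopback for each container port. The mapping actually chosen (32787 for 22/tcp in this run, as the SSH lines below show) can be recovered at any time with:

	docker port ha-828033 22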
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
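The server certificate generated here carries SANs for the loopback address, the container's static IP and the host names listed in san=[...]. If a TLS failure were suspected, the SANs could be confirmed with openssl (path taken from the log line above):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'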
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
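Two details of the provisioning step above are easy to miss. The empty ExecStart= line in the rendered unit clears the ExecStart inherited from the stock docker.service before the new command line is set; without it systemd would reject the unit with a "more than one ExecStart" error, exactly as the embedded comment warns. And the diff -u ... || { mv ...; systemctl restart docker; } one-liner only installs the new unit and restarts Docker when the rendered file actually differs from what is on disk. Inside the node, the effective unit can be reviewed with standard systemd tooling (verification only, not part of the test run):

	systemctl cat docker.service            # show the unit file systemd actually uses
	systemd-analyze verify docker.service   # basic syntax/sanity check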
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
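	(Annotation: the sed edits above align containerd with the "cgroupfs" driver detected on the host, and the restart makes them take effect. A quick post-restart check, as a sketch assuming the stock containerd 1.x config layout where SystemdCgroup sits under the runc options table:)
	# confirm the cgroup-driver flip took effect and containerd came back up
	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	systemctl is-active containerd                        # expect: active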
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
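	(Annotation: the kubeadm config above is what minikube later lands at /var/tmp/minikube/kubeadm.yaml via the .new copy. It can be exercised without side effects before the real init; a sketch using the same pinned binary path:)
	# dry-run the generated config to surface validation errors early
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run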
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
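	(Annotation: control-plane load-balancing was skipped above because `lsmod | grep ip_vs` came back empty, so this kube-vip manifest only advertises the 192.168.49.254 VIP via ARP. With the docker driver the node shares the host kernel, so the modules would have to be loaded on the host first; a sketch, assuming the standard ipvs module names:)
	# load ipvs on the host so minikube can enable kube-vip's LB mode
	sudo modprobe ip_vs ip_vs_rr
	lsmod | grep ip_vs   # non-empty output is what the check above looks for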
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
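	(Annotation: after this edit the node resolves the cluster's control-plane endpoint to the HA virtual IP that kube-vip advertises:)
	grep control-plane.minikube.internal /etc/hosts
	# expected: 192.168.49.254	control-plane.minikube.internal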
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
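	(Annotation: the `openssl x509 -hash -noout` calls above print each certificate's subject hash, which names the /etc/ssl/certs/<hash>.0 symlinks that OpenSSL uses for directory-based CA lookup. Tying the minikubeCA link back to its hash:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0   # symlink to /etc/ssl/certs/minikubeCA.pem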
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
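	(Annotation: the join commands printed above carry a bootstrap token with the 24h TTL set in the InitConfiguration. Once it expires, equivalent commands can be regenerated on the running control plane; a sketch with the same pinned binary:)
	# mint a fresh worker join command
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm token create --print-join-command
	# for an extra control-plane node, also produce the --certificate-key
	# (the init above skipped this: "[upload-certs] Skipping phase. Please see --upload-certs")
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init phase upload-certs --upload-certs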
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
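	(Annotation: with the manifest applied, pod networking is up once the CNI DaemonSet reports ready; a sketch, assuming minikube's usual kindnet DaemonSet name "kindnet" in kube-system:)
	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system rollout status daemonset kindnet --timeout=120s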
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
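	(Annotation: the burst of `kubectl get sa default` calls above is minikube polling until the controller-manager creates the "default" ServiceAccount, the precondition for the minikube-rbac ClusterRoleBinding issued earlier; the 12.15s metric is how long that wait took. The same probe by hand:)
	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get sa default
	# exits non-zero until the ServiceAccount exists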
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
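The kubectl replace pipeline a few lines up is what injects that record: it splices a hosts block ahead of the forward plugin in the coredns Corefile. A rough client-go equivalent, as a sketch only (injectHostRecord is a hypothetical helper, not minikube's implementation):

    package coredns

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // injectHostRecord splices a hosts{} block ahead of the forward plugin
    // in the coredns Corefile, mirroring the sed pipeline in the log above.
    func injectHostRecord(ctx context.Context, client kubernetes.Interface, hostIP string) error {
        cms := client.CoreV1().ConfigMaps("kube-system")
        cm, err := cms.Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            return err
        }
        hosts := "        hosts {\n           " + hostIP + " host.minikube.internal\n           fallthrough\n        }\n"
        cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
            "        forward .", hosts+"        forward .", 1)
        _, err = cms.Update(ctx, cm, metav1.UpdateOptions{})
        return err
    }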
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
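The "Creating ssh key for kic" step above boils down to: generate an RSA keypair, keep the private key on the host, and copy the public half into the container's authorized_keys. A minimal sketch of that key generation (assumed shape, not minikube's kic code):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Generate the keypair...
        priv, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // ...PEM-encode the private key for id_rsa...
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(priv),
        })
        // ...and marshal the public key in authorized_keys format for id_rsa.pub.
        pub, err := ssh.NewPublicKey(&priv.PublicKey)
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
            panic(err)
        }
    }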
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
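The three SSH commands above (hostname, sudo hostname ... | sudo tee, and the /etc/hosts edit) are what provisionDockerMachine runs over the container's published SSH port. A minimal sketch of running one such command with golang.org/x/crypto/ssh; the port and key path are the ones this log happens to use, not fixed values:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Private key created for the kic node earlier in this log.
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
        }
        // Dial the port Docker published for the container's sshd.
        client, err := ssh.Dial("tcp", "127.0.0.1:32792", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }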
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
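What follows is minikube's retry.go backing off on the same failure: the delays grow from roughly 100µs to around 20s with jitter until the provisioning budget is spent. The pattern, roughly (a sketch, not minikube's actual retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling op with jittered, roughly doubling
    // delays until it succeeds or maxElapsed is exceeded.
    func retryWithBackoff(op func() error, maxElapsed time.Duration) error {
        delay := 100 * time.Microsecond
        start := time.Now()
        for {
            err := op()
            if err == nil {
                return nil
            }
            if time.Since(start) > maxElapsed {
                return fmt.Errorf("giving up: %w", err)
            }
            time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // jitter
            delay *= 2
        }
    }

    func main() {
        err := retryWithBackoff(func() error {
            return errors.New("error getting ip during provisioning")
        }, 2*time.Second)
        fmt.Println(err)
    }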
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
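Reading the failure above: every configureAuth attempt ran the same docker container inspect template, which indexes .NetworkSettings.Networks by the machine name "ha-828033-m02", while the container was attached with --network ha-828033 (see the docker run earlier). When index misses the map key it yields a nil entry, the with block emits nothing, and splitting the empty output produces one value instead of the expected IPv4,IPv6 pair, which is consistent with the logged "got 1 values: []". That reading is an inference from this log, not a confirmed minikube bug report. A self-contained reproduction of the template behavior (hypothetical types, not Docker's):

    package main

    import (
        "bytes"
        "fmt"
        "strings"
        "text/template"
    )

    // Hypothetical stand-in for docker's NetworkSettings.Networks map.
    type endpoint struct{ IPAddress, GlobalIPv6Address string }

    func main() {
        networks := map[string]*endpoint{
            // The container joined "ha-828033", but the template below
            // indexes "ha-828033-m02".
            "ha-828033": {IPAddress: "192.168.49.3"},
        }
        tmpl := template.Must(template.New("ip").Parse(
            `{{with (index . "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))
        var buf bytes.Buffer
        if err := tmpl.Execute(&buf, networks); err != nil {
            panic(err)
        }
        // A missing map key makes index return nil, so {{with}} is skipped
        // and the output is empty.
        parts := strings.Split(buf.String(), ",")
        fmt.Printf("%d values: %q\n", len(parts), parts) // 1 values: [""]
    }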
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
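	The pair of `docker run --rm` commands above is the kic driver's volume-seeding pattern: a first throwaway container touches the ha-828033-m02 volume so Docker creates and populates /var, then a second throwaway container mounts the preload tarball read-only and untars it into the volume, so the node container later boots with its images already in place. Below is a minimal sketch of that same pattern via os/exec; extractPreload is a hypothetical helper (not minikube's code), and the argument values are copied from the log lines above.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// extractPreload replays the pattern logged above: mount the preload
	// tarball read-only, mount the machine volume at /extractDir, and run
	// tar inside a disposable container so the volume ends up populated.
	func extractPreload(tarball, volume, image string) error {
		out, err := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			image,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
		).CombinedOutput()
		if err != nil {
			return fmt.Errorf("extract preload: %v: %s", err, out)
		}
		return nil
	}

	func main() {
		fmt.Println(extractPreload(
			"/home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4",
			"ha-828033-m02",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887",
		))
	}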
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
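	Each "About to run SSH command" / "SSH cmd err, output" pair above is libmachine executing one provisioning step over the container's published SSH port (32797 here) as the docker user. The sketch below shows that round trip with golang.org/x/crypto/ssh, assuming key-only auth as the log's id_rsa path suggests; runProvision is a hypothetical helper, not libmachine's API.

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runProvision runs one provisioning command (like the hostname script
	// above) over SSH and returns its combined output.
	func runProvision(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		// Port 32797, the docker user, and the key path come from the log above.
		out, err := runProvision("127.0.0.1:32797", "docker",
			"/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa",
			`sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname`)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Print(out)
	}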
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
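	The failure that kicks off this retry storm is a parsing contract, visible in the error text itself: the inspect template prints "<IPv4>,<IPv6>" for the network entry keyed by the machine name, and the caller expects exactly two comma-separated values. The container, however, was attached to the network named ha-828033 (see the docker run at 17:54:58.233376 above), so indexing .NetworkSettings.Networks by "ha-828033-m02" matches nothing, the template output is empty, and the split yields a single empty value. A sketch reconstructing that parse, assuming the two-value contract; parseAddrs is illustrative, not minikube's source.

	package main

	import (
		"fmt"
		"strings"
	)

	// parseAddrs mirrors the two-value contract implied by the error text:
	// the inspect template prints "<IPv4>,<IPv6>" and the caller splits on
	// the comma.
	func parseAddrs(inspectOut string) (ipv4, ipv6 string, err error) {
		vals := strings.Split(strings.TrimSpace(inspectOut), ",")
		if len(vals) != 2 {
			return "", "", fmt.Errorf(
				"container addresses should have 2 values, got %d values: %v",
				len(vals), vals)
		}
		return vals[0], vals[1], nil
	}

	func main() {
		// Happy path: the container is found under the expected network key
		// and has an IPv4 address but no IPv6 address.
		fmt.Println(parseAddrs("192.168.49.3,"))

		// Failure path seen above: {{with ...}} matches no network entry, so
		// the template output is empty and Split returns one empty string,
		// reproducing "should have 2 values, got 1 values: []".
		_, _, err := parseAddrs("")
		fmt.Println(err)
	}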
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
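	The retry cadence above (111µs, 121µs, ... 32.16s) is a jittered exponential backoff: the wait roughly doubles per attempt until the provisioning budget (about 93 seconds here) is spent, at which point ubuntu.go:189 gives up. A compact sketch of that shape, under the assumption of doubling-with-jitter; retryExpBackoff is illustrative, not minikube's actual retry.go.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpBackoff retries fn with roughly doubling, jittered waits until
	// the budget is exhausted, then returns the last error.
	func retryExpBackoff(fn func() error, budget time.Duration) error {
		deadline := time.Now().Add(budget)
		wait := 100 * time.Microsecond
		var err error
		for time.Now().Before(deadline) {
			if err = fn(); err == nil {
				return nil
			}
			// Jitter: sleep somewhere between 0.5x and 1.5x the base wait.
			sleep := wait/2 + time.Duration(rand.Int63n(int64(wait)))
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			wait *= 2
		}
		return err
	}

	func main() {
		err := retryExpBackoff(func() error {
			return errors.New("error getting ip during provisioning")
		}, 2*time.Second) // short budget for the demo
		fmt.Println("gave up:", err)
	}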
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
	
	
	==> Docker <==
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:33 ha-828033 dockerd[1209]: 2024/05/22 18:08:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
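	The repeated dockerd noise above is Go's standard net/http warning: it is printed whenever something in the handler chain (here, dockerd's otelhttp wrapper at wrap.go:98) calls WriteHeader after the response status has already been written. A minimal reproduction; run it and curl 127.0.0.1:8080 to see the identical message on stderr.

	package main

	import (
		"log"
		"net/http"
	)

	// handler sets the status twice. The second call does not change the
	// response; net/http logs "http: superfluous response.WriteHeader call
	// from ..." -- the same message dockerd prints above.
	func handler(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.WriteHeader(http.StatusNotFound) // triggers the superfluous-call log
	}

	func main() {
		http.HandleFunc("/", handler)
		log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
	}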
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              15 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     15 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:08:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	
	
	==> kernel <==
	 18:08:44 up 51 min,  0 users,  load average: 0.97, 0.66, 0.54
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:06:42.126228       1 main.go:227] handling current node
	I0522 18:06:52.131861       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:06:52.131883       1 main.go:227] handling current node
	I0522 18:07:02.142017       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:02.142042       1 main.go:227] handling current node
	I0522 18:07:12.145450       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:12.145473       1 main.go:227] handling current node
	I0522 18:07:22.154163       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:22.154189       1 main.go:227] handling current node
	I0522 18:07:32.157408       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:32.157432       1 main.go:227] handling current node
	I0522 18:07:42.162820       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:42.162841       1 main.go:227] handling current node
	I0522 18:07:52.166119       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:07:52.166142       1 main.go:227] handling current node
	I0522 18:08:02.169586       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:02.169608       1 main.go:227] handling current node
	I0522 18:08:12.174539       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:12.174566       1 main.go:227] handling current node
	I0522 18:08:22.182207       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:22.182228       1 main.go:227] handling current node
	I0522 18:08:32.186386       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:32.186409       1 main.go:227] handling current node
	I0522 18:08:42.198337       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:08:42.198364       1 main.go:227] handling current node
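	(Every kindnet reconcile tick above handles exactly one node, 192.168.49.2, the primary; no other node IP ever appears, which is consistent with the node-add and restart failures elsewhere in this report. A throwaway Go helper for confirming that from a saved log, purely illustrative and not part of the test suite:)

	  package main

	  import (
	  	"bufio"
	  	"fmt"
	  	"os"
	  	"regexp"
	  )

	  // Reads a saved kindnet log on stdin and prints the distinct node IPs
	  // it handled; for the log above that set is just 192.168.49.2.
	  func main() {
	  	re := regexp.MustCompile(`Handling node with IPs: map\[([0-9.]+):`)
	  	seen := map[string]bool{}
	  	sc := bufio.NewScanner(os.Stdin)
	  	for sc.Scan() {
	  		if m := re.FindStringSubmatch(sc.Text()); m != nil {
	  			seen[m[1]] = true
	  		}
	  	}
	  	fmt.Println(seen)
	  }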
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	E0522 18:08:30.845810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38884: use of closed network connection
	E0522 18:08:30.987779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38902: use of closed network connection
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m1s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                 From               Message
	  ----     ------            ----                ----               -------
	  Warning  FailedScheduling  2m1s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.85s)
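The two Pending busybox pods described above are a scheduling problem rather than a crash: the replicas carry a hard pod anti-affinity rule keyed on the hostname, and since this cluster never got past a single Ready node, the second and third replicas have nowhere to land. A minimal sketch of that kind of constraint (the exact busybox manifest is not shown in the logs; the assumption is a required anti-affinity term on the Deployment's own app=busybox label, which is the shape the event text implies):

	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	func main() {
		// Hard anti-affinity: at most one app=busybox pod per hostname.
		// On a cluster with a single Ready node, replica #2 can never be
		// scheduled, producing the FailedScheduling event shown above.
		aff := &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		}
		fmt.Printf("%+v\n", aff)
	}

With such a term in place, "0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules" is exactly what the scheduler reports once the first replica occupies the only node.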

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (162.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 node start m02 -v=7 --alsologtostderr
E0522 18:08:47.887774   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
ha_test.go:420: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 node start m02 -v=7 --alsologtostderr: exit status 80 (1m47.721627375s)

                                                
                                                
-- stdout --
	* Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "ha-828033-m02" ...
	* Updating the running docker "ha-828033-m02" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:08:44.866631   88043 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:08:44.866885   88043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:44.866894   88043 out.go:304] Setting ErrFile to fd 2...
	I0522 18:08:44.866899   88043 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:44.867053   88043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:08:44.867263   88043 mustload.go:65] Loading cluster: ha-828033
	I0522 18:08:44.867612   88043 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:08:44.867949   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 18:08:44.883832   88043 host.go:58] "ha-828033-m02" host status: Stopped
	I0522 18:08:44.885995   88043 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:08:44.887384   88043 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:08:44.888566   88043 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:08:44.889682   88043 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:08:44.889726   88043 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:08:44.889749   88043 cache.go:56] Caching tarball of preloaded images
	I0522 18:08:44.889804   88043 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:08:44.889850   88043 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:08:44.889869   88043 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:08:44.890054   88043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:08:44.904767   88043 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:08:44.904786   88043 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:08:44.904808   88043 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:08:44.904848   88043 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:08:44.904934   88043 start.go:364] duration metric: took 57.555µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:08:44.904950   88043 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:08:44.904966   88043 fix.go:54] fixHost starting: m02
	I0522 18:08:44.905189   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:08:44.919714   88043 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:08:44.919738   88043 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:08:44.921508   88043 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:08:44.922762   88043 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:08:45.185535   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:08:45.202207   88043 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:08:45.202573   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:45.218530   88043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:08:45.218604   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:08:45.233986   88043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:08:45.234849   88043 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43020->127.0.0.1:32802: read: connection reset by peer
	I0522 18:08:45.234880   88043 retry.go:31] will retry after 261.70834ms: ssh: handshake failed: read tcp 127.0.0.1:43020->127.0.0.1:32802: read: connection reset by peer
	W0522 18:08:45.497714   88043 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43034->127.0.0.1:32802: read: connection reset by peer
	I0522 18:08:45.497744   88043 retry.go:31] will retry after 321.782547ms: ssh: handshake failed: read tcp 127.0.0.1:43034->127.0.0.1:32802: read: connection reset by peer
	I0522 18:08:45.899624   88043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:08:45.903455   88043 fix.go:56] duration metric: took 998.487378ms for fixHost
	I0522 18:08:45.903477   88043 start.go:83] releasing machines lock for "ha-828033-m02", held for 998.533058ms
	W0522 18:08:45.903490   88043 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:08:45.903549   88043 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:08:45.903563   88043 start.go:728] Will try again in 5 seconds ...
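	(The root error here traces back to the docker container inspect template {{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}} run above: when the indexed network key is missing from the restarted container, "with" skips its body for the zero value, the command prints nothing, and the comma-split no longer yields an IPv4/IPv6 pair. A hypothetical sketch of the failing check, not minikube's actual code, that reproduces the message:)

	  package main

	  import (
	  	"fmt"
	  	"strings"
	  )

	  // parseAddrs mimics the shape of the check behind this error: the
	  // caller expects "<ipv4>,<ipv6>" from the inspect template, and an
	  // empty template result splits into a single empty field.
	  func parseAddrs(out string) (string, string, error) {
	  	parts := strings.Split(strings.TrimSpace(out), ",")
	  	if len(parts) != 2 {
	  		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(parts), parts)
	  	}
	  	return parts[0], parts[1], nil
	  }

	  func main() {
	  	_, _, err := parseAddrs("") // template printed nothing
	  	fmt.Println(err)            // container addresses should have 2 values, got 1 values: []
	  }

	(Whether the container is genuinely detached from its network or the template is indexing the wrong key, "ha-828033-m02" rather than the cluster network "ha-828033", is not decidable from this log alone.)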
	I0522 18:08:50.904464   88043 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:08:50.904578   88043 start.go:364] duration metric: took 80.459µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:08:50.904611   88043 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:08:50.904621   88043 fix.go:54] fixHost starting: m02
	I0522 18:08:50.904864   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:08:50.920613   88043 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:08:50.920636   88043 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:08:50.922449   88043 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:08:50.923536   88043 machine.go:94] provisionDockerMachine start ...
	I0522 18:08:50.923617   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:08:50.939921   88043 main.go:141] libmachine: Using SSH client type: native
	I0522 18:08:50.940115   88043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
	I0522 18:08:50.940128   88043 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:08:51.050405   88043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:08:51.050457   88043 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:08:51.050523   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:08:51.066244   88043 main.go:141] libmachine: Using SSH client type: native
	I0522 18:08:51.066431   88043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
	I0522 18:08:51.066445   88043 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:08:51.189547   88043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:08:51.189607   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:08:51.205722   88043 main.go:141] libmachine: Using SSH client type: native
	I0522 18:08:51.205888   88043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
	I0522 18:08:51.205904   88043 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:08:51.314807   88043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:08:51.314847   88043 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:08:51.314885   88043 ubuntu.go:177] setting up certificates
	I0522 18:08:51.314901   88043 provision.go:84] configureAuth start
	I0522 18:08:51.314961   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.331045   88043 provision.go:87] duration metric: took 16.132516ms to configureAuth
	W0522 18:08:51.331066   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.331083   88043 retry.go:31] will retry after 147.661µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.332192   88043 provision.go:84] configureAuth start
	I0522 18:08:51.332246   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.347954   88043 provision.go:87] duration metric: took 15.745176ms to configureAuth
	W0522 18:08:51.347971   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.347986   88043 retry.go:31] will retry after 211.868µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.349091   88043 provision.go:84] configureAuth start
	I0522 18:08:51.349144   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.364243   88043 provision.go:87] duration metric: took 15.128147ms to configureAuth
	W0522 18:08:51.364261   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.364278   88043 retry.go:31] will retry after 337.097µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.365397   88043 provision.go:84] configureAuth start
	I0522 18:08:51.365453   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.381024   88043 provision.go:87] duration metric: took 15.610551ms to configureAuth
	W0522 18:08:51.381040   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.381064   88043 retry.go:31] will retry after 247.861µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.382172   88043 provision.go:84] configureAuth start
	I0522 18:08:51.382225   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.397461   88043 provision.go:87] duration metric: took 15.272425ms to configureAuth
	W0522 18:08:51.397478   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.397493   88043 retry.go:31] will retry after 426.464µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.398596   88043 provision.go:84] configureAuth start
	I0522 18:08:51.398654   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.413827   88043 provision.go:87] duration metric: took 15.214361ms to configureAuth
	W0522 18:08:51.413844   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.413859   88043 retry.go:31] will retry after 1.122957ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.416047   88043 provision.go:84] configureAuth start
	I0522 18:08:51.416105   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.430726   88043 provision.go:87] duration metric: took 14.657009ms to configureAuth
	W0522 18:08:51.430740   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.430755   88043 retry.go:31] will retry after 1.684627ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.432923   88043 provision.go:84] configureAuth start
	I0522 18:08:51.432982   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.448273   88043 provision.go:87] duration metric: took 15.332728ms to configureAuth
	W0522 18:08:51.448291   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.448305   88043 retry.go:31] will retry after 1.312687ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.450479   88043 provision.go:84] configureAuth start
	I0522 18:08:51.450543   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.465373   88043 provision.go:87] duration metric: took 14.876839ms to configureAuth
	W0522 18:08:51.465389   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.465409   88043 retry.go:31] will retry after 2.703435ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.468595   88043 provision.go:84] configureAuth start
	I0522 18:08:51.468655   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.484410   88043 provision.go:87] duration metric: took 15.799265ms to configureAuth
	W0522 18:08:51.484429   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.484448   88043 retry.go:31] will retry after 4.425958ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.489678   88043 provision.go:84] configureAuth start
	I0522 18:08:51.489749   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.506939   88043 provision.go:87] duration metric: took 17.228297ms to configureAuth
	W0522 18:08:51.506956   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.506971   88043 retry.go:31] will retry after 6.39974ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.514160   88043 provision.go:84] configureAuth start
	I0522 18:08:51.514211   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.529276   88043 provision.go:87] duration metric: took 15.099983ms to configureAuth
	W0522 18:08:51.529305   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.529320   88043 retry.go:31] will retry after 9.541537ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.539498   88043 provision.go:84] configureAuth start
	I0522 18:08:51.539555   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.556257   88043 provision.go:87] duration metric: took 16.729482ms to configureAuth
	W0522 18:08:51.556277   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.556296   88043 retry.go:31] will retry after 7.274016ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.564490   88043 provision.go:84] configureAuth start
	I0522 18:08:51.564554   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.580303   88043 provision.go:87] duration metric: took 15.792211ms to configureAuth
	W0522 18:08:51.580325   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.580343   88043 retry.go:31] will retry after 15.274583ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.596543   88043 provision.go:84] configureAuth start
	I0522 18:08:51.596635   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.612727   88043 provision.go:87] duration metric: took 16.161318ms to configureAuth
	W0522 18:08:51.612745   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.612760   88043 retry.go:31] will retry after 35.764136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.648958   88043 provision.go:84] configureAuth start
	I0522 18:08:51.649039   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.664752   88043 provision.go:87] duration metric: took 15.769772ms to configureAuth
	W0522 18:08:51.664768   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.664783   88043 retry.go:31] will retry after 65.493036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.730988   88043 provision.go:84] configureAuth start
	I0522 18:08:51.731062   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.747148   88043 provision.go:87] duration metric: took 16.134937ms to configureAuth
	W0522 18:08:51.747166   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.747185   88043 retry.go:31] will retry after 87.502226ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.835436   88043 provision.go:84] configureAuth start
	I0522 18:08:51.835518   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.851387   88043 provision.go:87] duration metric: took 15.925597ms to configureAuth
	W0522 18:08:51.851408   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.851426   88043 retry.go:31] will retry after 121.671486ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.975020   88043 provision.go:84] configureAuth start
	I0522 18:08:51.975151   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:51.991884   88043 provision.go:87] duration metric: took 16.832427ms to configureAuth
	W0522 18:08:51.991903   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:51.991918   88043 retry.go:31] will retry after 192.321648ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:52.185287   88043 provision.go:84] configureAuth start
	I0522 18:08:52.185376   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:52.201926   88043 provision.go:87] duration metric: took 16.614218ms to configureAuth
	W0522 18:08:52.201944   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:52.201959   88043 retry.go:31] will retry after 154.477068ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:52.357266   88043 provision.go:84] configureAuth start
	I0522 18:08:52.357354   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:52.373909   88043 provision.go:87] duration metric: took 16.607155ms to configureAuth
	W0522 18:08:52.373926   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:52.373943   88043 retry.go:31] will retry after 207.76449ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:52.582334   88043 provision.go:84] configureAuth start
	I0522 18:08:52.582460   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:52.598666   88043 provision.go:87] duration metric: took 16.305998ms to configureAuth
	W0522 18:08:52.598683   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:52.598700   88043 retry.go:31] will retry after 661.728267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:53.261524   88043 provision.go:84] configureAuth start
	I0522 18:08:53.261622   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:53.277777   88043 provision.go:87] duration metric: took 16.215043ms to configureAuth
	W0522 18:08:53.277802   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:53.277818   88043 retry.go:31] will retry after 515.277419ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:53.793420   88043 provision.go:84] configureAuth start
	I0522 18:08:53.793496   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:53.810371   88043 provision.go:87] duration metric: took 16.924746ms to configureAuth
	W0522 18:08:53.810389   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:53.810406   88043 retry.go:31] will retry after 571.473073ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:54.382825   88043 provision.go:84] configureAuth start
	I0522 18:08:54.382954   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:54.398914   88043 provision.go:87] duration metric: took 16.06301ms to configureAuth
	W0522 18:08:54.398932   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:54.398949   88043 retry.go:31] will retry after 1.343351192s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:55.742345   88043 provision.go:84] configureAuth start
	I0522 18:08:55.742451   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:55.758108   88043 provision.go:87] duration metric: took 15.723568ms to configureAuth
	W0522 18:08:55.758126   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:55.758142   88043 retry.go:31] will retry after 3.488192923s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:59.247328   88043 provision.go:84] configureAuth start
	I0522 18:08:59.247422   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:08:59.264115   88043 provision.go:87] duration metric: took 16.761494ms to configureAuth
	W0522 18:08:59.264133   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:08:59.264148   88043 retry.go:31] will retry after 2.370997139s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:01.635328   88043 provision.go:84] configureAuth start
	I0522 18:09:01.635414   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:09:01.651885   88043 provision.go:87] duration metric: took 16.531617ms to configureAuth
	W0522 18:09:01.651904   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:01.651921   88043 retry.go:31] will retry after 3.669978799s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:05.322889   88043 provision.go:84] configureAuth start
	I0522 18:09:05.323002   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:09:05.338218   88043 provision.go:87] duration metric: took 15.300166ms to configureAuth
	W0522 18:09:05.338237   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:05.338258   88043 retry.go:31] will retry after 8.026240204s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:13.367340   88043 provision.go:84] configureAuth start
	I0522 18:09:13.367428   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:09:13.384088   88043 provision.go:87] duration metric: took 16.715781ms to configureAuth
	W0522 18:09:13.384105   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:13.384123   88043 retry.go:31] will retry after 11.584636309s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:24.969860   88043 provision.go:84] configureAuth start
	I0522 18:09:24.969975   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:09:24.986531   88043 provision.go:87] duration metric: took 16.642786ms to configureAuth
	W0522 18:09:24.986561   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:24.986580   88043 retry.go:31] will retry after 26.496866718s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:51.485013   88043 provision.go:84] configureAuth start
	I0522 18:09:51.485137   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:09:51.501970   88043 provision.go:87] duration metric: took 16.930488ms to configureAuth
	W0522 18:09:51.501997   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:09:51.502017   88043 retry.go:31] will retry after 40.915032332s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:10:32.418142   88043 provision.go:84] configureAuth start
	I0522 18:10:32.418217   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:10:32.435030   88043 provision.go:87] duration metric: took 16.841322ms to configureAuth
	W0522 18:10:32.435051   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:10:32.435067   88043 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:10:32.435073   88043 machine.go:97] duration metric: took 1m41.511525475s to provisionDockerMachine
	I0522 18:10:32.435142   88043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:32.435187   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:10:32.453586   88043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:10:32.535707   88043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:10:32.539853   88043 fix.go:56] duration metric: took 1m41.635228072s for fixHost
	I0522 18:10:32.539878   88043 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m41.635283968s
	W0522 18:10:32.539961   88043 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	* Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:10:32.542193   88043 out.go:177] 
	W0522 18:10:32.543664   88043 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:10:32.543678   88043 out.go:239] * 
	* 
	W0522 18:10:32.546036   88043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:10:32.547518   88043 out.go:177] 

** /stderr **
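Every configureAuth retry in the stderr above fails the same way: the provisioner shells out to "docker container inspect" with a Go template that should print the container's "IPv4,IPv6" pair on the named network, splits the output on the comma, and requires exactly two values. Because the container has no entry under the network name being indexed, the template expands to an empty string, the split yields a single empty element, and the check fails with the "got 1 values: []" seen on every W line. The Go sketch below is an illustrative reconstruction of that lookup, not minikube's actual source; only the inspect template and the two-value check are taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIPs mirrors the inspect call from the log: ask dockerd for the
// IPv4/IPv6 pair of a container on a named network, and insist on both.
func containerIPs(container, network string) (string, string, error) {
	tmpl := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", "", err
	}
	ips := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(ips) != 2 {
		// A container that is not attached to a network called `network`
		// makes the template print nothing; splitting "" gives one empty
		// element, which is the exact failure the log keeps retrying.
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(ips), ips)
	}
	return ips[0], ips[1], nil
}

func main() {
	// "ha-828033-m02" is used both as container name and network key here
	// only because that is what the logged template indexes.
	ip4, ip6, err := containerIPs("ha-828033-m02", "ha-828033-m02")
	fmt.Println(ip4, ip6, err)
}

Note that strings.Split("", ",") in Go returns []string{""}, whose %v rendering is "[]" — matching "got 1 values: []" exactly: the lookup is returning an empty address pair rather than failing outright.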
ha_test.go:422: I0522 18:08:44.866631   88043 out.go:291] Setting OutFile to fd 1 ...
I0522 18:08:44.866885   88043 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 18:08:44.866894   88043 out.go:304] Setting ErrFile to fd 2...
I0522 18:08:44.866899   88043 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 18:08:44.867053   88043 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 18:08:44.867263   88043 mustload.go:65] Loading cluster: ha-828033
I0522 18:08:44.867612   88043 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 18:08:44.867949   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
W0522 18:08:44.883832   88043 host.go:58] "ha-828033-m02" host status: Stopped
I0522 18:08:44.885995   88043 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
I0522 18:08:44.887384   88043 cache.go:121] Beginning downloading kic base image for docker with docker
I0522 18:08:44.888566   88043 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
I0522 18:08:44.889682   88043 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
I0522 18:08:44.889726   88043 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
I0522 18:08:44.889749   88043 cache.go:56] Caching tarball of preloaded images
I0522 18:08:44.889804   88043 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
I0522 18:08:44.889850   88043 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0522 18:08:44.889869   88043 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
I0522 18:08:44.890054   88043 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
I0522 18:08:44.904767   88043 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
I0522 18:08:44.904786   88043 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
I0522 18:08:44.904808   88043 cache.go:194] Successfully downloaded all kic artifacts
I0522 18:08:44.904848   88043 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 18:08:44.904934   88043 start.go:364] duration metric: took 57.555µs to acquireMachinesLock for "ha-828033-m02"
I0522 18:08:44.904950   88043 start.go:96] Skipping create...Using existing machine configuration
I0522 18:08:44.904966   88043 fix.go:54] fixHost starting: m02
I0522 18:08:44.905189   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 18:08:44.919714   88043 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
W0522 18:08:44.919738   88043 fix.go:138] unexpected machine state, will restart: <nil>
I0522 18:08:44.921508   88043 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
I0522 18:08:44.922762   88043 cli_runner.go:164] Run: docker start ha-828033-m02
I0522 18:08:45.185535   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 18:08:45.202207   88043 kic.go:430] container "ha-828033-m02" state is running.
I0522 18:08:45.202573   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 18:08:45.218530   88043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 18:08:45.218604   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 18:08:45.233986   88043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
W0522 18:08:45.234849   88043 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43020->127.0.0.1:32802: read: connection reset by peer
I0522 18:08:45.234880   88043 retry.go:31] will retry after 261.70834ms: ssh: handshake failed: read tcp 127.0.0.1:43020->127.0.0.1:32802: read: connection reset by peer
W0522 18:08:45.497714   88043 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43034->127.0.0.1:32802: read: connection reset by peer
I0522 18:08:45.497744   88043 retry.go:31] will retry after 321.782547ms: ssh: handshake failed: read tcp 127.0.0.1:43034->127.0.0.1:32802: read: connection reset by peer
I0522 18:08:45.899624   88043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 18:08:45.903455   88043 fix.go:56] duration metric: took 998.487378ms for fixHost
I0522 18:08:45.903477   88043 start.go:83] releasing machines lock for "ha-828033-m02", held for 998.533058ms
W0522 18:08:45.903490   88043 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
W0522 18:08:45.903549   88043 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
I0522 18:08:45.903563   88043 start.go:728] Will try again in 5 seconds ...
I0522 18:08:50.904464   88043 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0522 18:08:50.904578   88043 start.go:364] duration metric: took 80.459µs to acquireMachinesLock for "ha-828033-m02"
I0522 18:08:50.904611   88043 start.go:96] Skipping create...Using existing machine configuration
I0522 18:08:50.904621   88043 fix.go:54] fixHost starting: m02
I0522 18:08:50.904864   88043 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
I0522 18:08:50.920613   88043 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
W0522 18:08:50.920636   88043 fix.go:138] unexpected machine state, will restart: <nil>
I0522 18:08:50.922449   88043 out.go:177] * Updating the running docker "ha-828033-m02" container ...
I0522 18:08:50.923536   88043 machine.go:94] provisionDockerMachine start ...
I0522 18:08:50.923617   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 18:08:50.939921   88043 main.go:141] libmachine: Using SSH client type: native
I0522 18:08:50.940115   88043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
I0522 18:08:50.940128   88043 main.go:141] libmachine: About to run SSH command:
hostname
I0522 18:08:51.050405   88043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02

I0522 18:08:51.050457   88043 ubuntu.go:169] provisioning hostname "ha-828033-m02"
I0522 18:08:51.050523   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 18:08:51.066244   88043 main.go:141] libmachine: Using SSH client type: native
I0522 18:08:51.066431   88043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
I0522 18:08:51.066445   88043 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
I0522 18:08:51.189547   88043 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02

I0522 18:08:51.189607   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 18:08:51.205722   88043 main.go:141] libmachine: Using SSH client type: native
I0522 18:08:51.205888   88043 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32802 <nil> <nil>}
I0522 18:08:51.205904   88043 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0522 18:08:51.314807   88043 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0522 18:08:51.314847   88043 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
I0522 18:08:51.314885   88043 ubuntu.go:177] setting up certificates
I0522 18:08:51.314901   88043 provision.go:84] configureAuth start
I0522 18:08:51.314961   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 18:08:51.331045   88043 provision.go:87] duration metric: took 16.132516ms to configureAuth
W0522 18:08:51.331066   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:08:51.331083   88043 retry.go:31] will retry after 147.661µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:08:51.332192   88043 provision.go:84] configureAuth start
I0522 18:08:51.332246   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 18:08:51.347954   88043 provision.go:87] duration metric: took 15.745176ms to configureAuth
W0522 18:08:51.347971   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:08:51.347986   88043 retry.go:31] will retry after 211.868µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:08:51.349091   88043 provision.go:84] configureAuth start
I0522 18:08:51.349144   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 18:08:51.364243   88043 provision.go:87] duration metric: took 15.128147ms to configureAuth
W0522 18:08:51.364261   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:08:51.364278   88043 retry.go:31] will retry after 337.097µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:10:32.418142   88043 provision.go:84] configureAuth start
I0522 18:10:32.418217   88043 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
I0522 18:10:32.435030   88043 provision.go:87] duration metric: took 16.841322ms to configureAuth
W0522 18:10:32.435051   88043 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:10:32.435067   88043 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:10:32.435073   88043 machine.go:97] duration metric: took 1m41.511525475s to provisionDockerMachine
I0522 18:10:32.435142   88043 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0522 18:10:32.435187   88043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
I0522 18:10:32.453586   88043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32802 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
I0522 18:10:32.535707   88043 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
I0522 18:10:32.539853   88043 fix.go:56] duration metric: took 1m41.635228072s for fixHost
I0522 18:10:32.539878   88043 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m41.635283968s
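The retry delays above (11.58s, 26.50s, 40.92s) follow a growing, jittered schedule. A minimal sketch of that pattern, assuming roughly doubling delays with up to 50% jitter (retryWithBackoff is illustrative, not retry.go's implementation; the demo uses milliseconds so it finishes quickly):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries fn up to attempts times, roughly doubling the
// delay each round and adding up to 50% jitter so parallel jobs do not
// retry in lockstep.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}

func main() {
	err := retryWithBackoff(4, 10*time.Millisecond, func() error {
		return errors.New("error getting ip during provisioning")
	})
	fmt.Println("giving up:", err)
}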
W0522 18:10:32.539961   88043 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
I0522 18:10:32.542193   88043 out.go:177] 
W0522 18:10:32.543664   88043 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
W0522 18:10:32.543678   88043 out.go:239] * 
W0522 18:10:32.546036   88043 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_node_6a758bccf1d363a5d0799efcdea444172a621e97_0.log                    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0522 18:10:32.547518   88043 out.go:177] 
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-linux-amd64 -p ha-828033 node start m02 -v=7 --alsologtostderr": exit status 80
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (293.766647ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:10:32.592563   89965 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:10:32.592816   89965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:32.592824   89965 out.go:304] Setting ErrFile to fd 2...
	I0522 18:10:32.592828   89965 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:32.593021   89965 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:10:32.593172   89965 out.go:298] Setting JSON to false
	I0522 18:10:32.593196   89965 mustload.go:65] Loading cluster: ha-828033
	I0522 18:10:32.593303   89965 notify.go:220] Checking for updates...
	I0522 18:10:32.593527   89965 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:10:32.593550   89965 status.go:255] checking status of ha-828033 ...
	I0522 18:10:32.593943   89965 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:10:32.611475   89965 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:10:32.611501   89965 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:32.611812   89965 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:10:32.627040   89965 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:32.627300   89965 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:32.627341   89965 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:10:32.643008   89965 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:10:32.724216   89965 ssh_runner.go:195] Run: systemctl --version
	I0522 18:10:32.727983   89965 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:10:32.737820   89965 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:10:32.782895   89965 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:10:32.774228405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:10:32.783439   89965 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:10:32.783466   89965 api_server.go:166] Checking apiserver status ...
	I0522 18:10:32.783494   89965 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:10:32.793953   89965 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:10:32.802158   89965 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:10:32.802204   89965 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:10:32.809286   89965 api_server.go:204] freezer state: "THAWED"
	I0522 18:10:32.809308   89965 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:10:32.813983   89965 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:10:32.814005   89965 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:10:32.814016   89965 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:10:32.814030   89965 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:10:32.814261   89965 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:10:32.830256   89965 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:10:32.830276   89965 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:10:32.830550   89965 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:10:32.845582   89965 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:10:32.845604   89965 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:10:32.845616   89965 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
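The healthy half of the status run above shows how the apiserver is probed without asking the kubelet: pgrep finds the kube-apiserver PID, /proc/<pid>/cgroup names its freezer cgroup, and freezer.state must read THAWED. A sketch of that lookup, assuming cgroup v1 as in the log (freezerState is a hypothetical helper):

package main

import (
	"fmt"
	"os"
	"strings"
)

// freezerState finds the pid's freezer controller in /proc/<pid>/cgroup
// (lines look like "13:freezer:/docker/...") and reads its state file;
// "THAWED" means the process is not frozen.
func freezerState(pid int) (string, error) {
	data, err := os.ReadFile(fmt.Sprintf("/proc/%d/cgroup", pid))
	if err != nil {
		return "", err
	}
	for _, line := range strings.Split(string(data), "\n") {
		parts := strings.SplitN(line, ":", 3)
		if len(parts) == 3 && parts[1] == "freezer" {
			state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
			if err != nil {
				return "", err
			}
			return strings.TrimSpace(string(state)), nil
		}
	}
	return "", fmt.Errorf("no freezer controller for pid %d", pid)
}

func main() {
	state, err := freezerState(2281) // PID from the pgrep step above
	fmt.Println(state, err)
}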
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (289.930182ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:10:33.653530   90083 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:10:33.653658   90083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:33.653671   90083 out.go:304] Setting ErrFile to fd 2...
	I0522 18:10:33.653678   90083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:33.653874   90083 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:10:33.654034   90083 out.go:298] Setting JSON to false
	I0522 18:10:33.654061   90083 mustload.go:65] Loading cluster: ha-828033
	I0522 18:10:33.654156   90083 notify.go:220] Checking for updates...
	I0522 18:10:33.654384   90083 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:10:33.654397   90083 status.go:255] checking status of ha-828033 ...
	I0522 18:10:33.654808   90083 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:10:33.672050   90083 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:10:33.672077   90083 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:33.672314   90083 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:10:33.687091   90083 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:33.687411   90083 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:33.687466   90083 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:10:33.702814   90083 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:10:33.783928   90083 ssh_runner.go:195] Run: systemctl --version
	I0522 18:10:33.787587   90083 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:10:33.797160   90083 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:10:33.842563   90083 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:10:33.834313687 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:10:33.843066   90083 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:10:33.843094   90083 api_server.go:166] Checking apiserver status ...
	I0522 18:10:33.843129   90083 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:10:33.853565   90083 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:10:33.861376   90083 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:10:33.861425   90083 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:10:33.868638   90083 api_server.go:204] freezer state: "THAWED"
	I0522 18:10:33.868659   90083 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:10:33.872841   90083 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:10:33.872858   90083 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:10:33.872867   90083 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:10:33.872886   90083 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:10:33.873118   90083 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:10:33.888521   90083 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:10:33.888541   90083 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:10:33.888813   90083 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:10:33.904791   90083 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:10:33.904815   90083 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:10:33.904837   90083 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
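The last liveness signal in each run is an HTTPS GET against /healthz on the cluster VIP, expecting a 200 with body "ok". A minimal diagnostic sketch, assuming certificate verification is skipped the way a probe of a self-signed cluster endpoint would be (apiserverHealthy is illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy reports whether the endpoint answers 200 "ok".
// TLS verification is disabled because the apiserver presents a
// cluster-local CA certificate.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.254:8443/healthz")
	fmt.Println(ok, err)
}

(strings is imported implicitly above; add "strings" to the import list when compiling.)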
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (293.305143ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:10:35.357191   90202 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:10:35.357297   90202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:35.357306   90202 out.go:304] Setting ErrFile to fd 2...
	I0522 18:10:35.357310   90202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:35.357487   90202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:10:35.357626   90202 out.go:298] Setting JSON to false
	I0522 18:10:35.357648   90202 mustload.go:65] Loading cluster: ha-828033
	I0522 18:10:35.357685   90202 notify.go:220] Checking for updates...
	I0522 18:10:35.357931   90202 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:10:35.357945   90202 status.go:255] checking status of ha-828033 ...
	I0522 18:10:35.358299   90202 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:10:35.374355   90202 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:10:35.374381   90202 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:35.374623   90202 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:10:35.389872   90202 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:35.390072   90202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:35.390118   90202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:10:35.405349   90202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:10:35.488138   90202 ssh_runner.go:195] Run: systemctl --version
	I0522 18:10:35.492252   90202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:10:35.502232   90202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:10:35.547405   90202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:10:35.538476526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:10:35.548014   90202 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:10:35.548047   90202 api_server.go:166] Checking apiserver status ...
	I0522 18:10:35.548085   90202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:10:35.558999   90202 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:10:35.567367   90202 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:10:35.567428   90202 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:10:35.574475   90202 api_server.go:204] freezer state: "THAWED"
	I0522 18:10:35.574497   90202 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:10:35.578815   90202 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:10:35.578837   90202 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:10:35.578846   90202 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:10:35.578861   90202 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:10:35.579084   90202 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:10:35.595188   90202 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:10:35.595210   90202 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:10:35.595483   90202 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:10:35.611121   90202 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:10:35.611145   90202 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:10:35.611162   90202 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
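The sshutil line (IP:127.0.0.1 Port:32787) works because the Docker driver publishes the guest's sshd on a random host port, recovered with the same inspect template seen in the cli_runner call. A sketch of that lookup (sshHostPort is a hypothetical helper):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is mapped to the container's
// 22/tcp, using the same Go template as the log's cli_runner call.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-828033")
	fmt.Println(port, err) // e.g. 32787, per the sshutil line above
}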
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (289.745594ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:10:38.214441   90349 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:10:38.214687   90349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:38.214696   90349 out.go:304] Setting ErrFile to fd 2...
	I0522 18:10:38.214700   90349 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:38.214857   90349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:10:38.215044   90349 out.go:298] Setting JSON to false
	I0522 18:10:38.215070   90349 mustload.go:65] Loading cluster: ha-828033
	I0522 18:10:38.215180   90349 notify.go:220] Checking for updates...
	I0522 18:10:38.215476   90349 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:10:38.215496   90349 status.go:255] checking status of ha-828033 ...
	I0522 18:10:38.215890   90349 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:10:38.232595   90349 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:10:38.232619   90349 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:38.232839   90349 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:10:38.248568   90349 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:38.248765   90349 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:38.248797   90349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:10:38.263699   90349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:10:38.343986   90349 ssh_runner.go:195] Run: systemctl --version
	I0522 18:10:38.347743   90349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:10:38.357364   90349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:10:38.404631   90349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:10:38.39611367 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:10:38.405340   90349 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:10:38.405373   90349 api_server.go:166] Checking apiserver status ...
	I0522 18:10:38.405413   90349 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:10:38.415883   90349 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:10:38.423882   90349 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:10:38.423945   90349 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:10:38.430938   90349 api_server.go:204] freezer state: "THAWED"
	I0522 18:10:38.430960   90349 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:10:38.434543   90349 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:10:38.434566   90349 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:10:38.434576   90349 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:10:38.434593   90349 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:10:38.434813   90349 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:10:38.451128   90349 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:10:38.451147   90349 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:10:38.451393   90349 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:10:38.465809   90349 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:10:38.465842   90349 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:10:38.465860   90349 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
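Note how one failed IP lookup cascades through the report: docker inspect says the m02 container is Running, yet host is Error and kubelet/apiserver/kubeconfig are Nonexistent, because without an address none of them can be probed. A sketch of that fallback (Status and nodeStatus are illustrative, not minikube's status.go types):

package main

import (
	"errors"
	"fmt"
)

type Status struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
}

// nodeStatus shows the cascade: if the driver IP cannot be determined,
// everything reachable only through that IP is reported Nonexistent
// rather than Stopped, mirroring "host: Error" in the report.
func nodeStatus(name, ip string, ipErr error) Status {
	if ipErr != nil {
		return Status{name, "Error", "Nonexistent", "Nonexistent", "Nonexistent"}
	}
	// ... probe kubelet over ssh and the apiserver over https here ...
	return Status{name, "Running", "Running", "Running", "Configured"}
}

func main() {
	err := errors.New("container addresses should have 2 values, got 1 values: []")
	fmt.Printf("%+v\n", nodeStatus("ha-828033-m02", "", err))
}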
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (297.028043ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:10:41.656866   90469 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:10:41.656962   90469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:41.656970   90469 out.go:304] Setting ErrFile to fd 2...
	I0522 18:10:41.656974   90469 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:41.657151   90469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:10:41.657308   90469 out.go:298] Setting JSON to false
	I0522 18:10:41.657331   90469 mustload.go:65] Loading cluster: ha-828033
	I0522 18:10:41.657446   90469 notify.go:220] Checking for updates...
	I0522 18:10:41.657663   90469 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:10:41.657676   90469 status.go:255] checking status of ha-828033 ...
	I0522 18:10:41.658041   90469 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:10:41.675799   90469 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:10:41.675840   90469 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:41.676128   90469 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:10:41.691863   90469 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:41.692167   90469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:41.692218   90469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:10:41.709076   90469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:10:41.791891   90469 ssh_runner.go:195] Run: systemctl --version
	I0522 18:10:41.795656   90469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:10:41.805361   90469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:10:41.853564   90469 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:10:41.845074137 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:10:41.854132   90469 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:10:41.854158   90469 api_server.go:166] Checking apiserver status ...
	I0522 18:10:41.854184   90469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:10:41.864727   90469 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:10:41.873004   90469 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:10:41.873073   90469 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:10:41.880483   90469 api_server.go:204] freezer state: "THAWED"
	I0522 18:10:41.880511   90469 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:10:41.883991   90469 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:10:41.884012   90469 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:10:41.884023   90469 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:10:41.884037   90469 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:10:41.884244   90469 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:10:41.899594   90469 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:10:41.899613   90469 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:10:41.899885   90469 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:10:41.914534   90469 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:10:41.914567   90469 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:10:41.914594   90469 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
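The df pair that recurs in these runs is a quick capacity check on the guest: awk 'NR==2' selects df's data row, $5 is the Use% column under df -h, and $4 under df -BG is the available space in gigabytes. A local sketch of both probes, run directly rather than over ssh for brevity:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Percentage of /var in use (e.g. "12%").
	usedPct, _ := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	// Gigabytes still available on /var (e.g. "250G").
	freeGB, _ := exec.Command("sh", "-c", `df -BG /var | awk 'NR==2{print $4}'`).Output()
	fmt.Printf("used: %s free: %s", usedPct, freeGB)
}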
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (300.004391ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:10:47.812151   90642 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:10:47.812418   90642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:47.812430   90642 out.go:304] Setting ErrFile to fd 2...
	I0522 18:10:47.812437   90642 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:47.812600   90642 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:10:47.812809   90642 out.go:298] Setting JSON to false
	I0522 18:10:47.812840   90642 mustload.go:65] Loading cluster: ha-828033
	I0522 18:10:47.812940   90642 notify.go:220] Checking for updates...
	I0522 18:10:47.813188   90642 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:10:47.813203   90642 status.go:255] checking status of ha-828033 ...
	I0522 18:10:47.813647   90642 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:10:47.832812   90642 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:10:47.832854   90642 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:47.833111   90642 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:10:47.850708   90642 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:47.850966   90642 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:47.851009   90642 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:10:47.866314   90642 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:10:47.948048   90642 ssh_runner.go:195] Run: systemctl --version
	I0522 18:10:47.951831   90642 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:10:47.961206   90642 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:10:48.011095   90642 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:10:48.002093177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:10:48.011643   90642 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:10:48.011671   90642 api_server.go:166] Checking apiserver status ...
	I0522 18:10:48.011699   90642 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:10:48.022062   90642 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:10:48.030004   90642 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:10:48.030054   90642 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:10:48.037438   90642 api_server.go:204] freezer state: "THAWED"
	I0522 18:10:48.037462   90642 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:10:48.040918   90642 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:10:48.040937   90642 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:10:48.040946   90642 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:10:48.040965   90642 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:10:48.041205   90642 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:10:48.057453   90642 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:10:48.057473   90642 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:10:48.057703   90642 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:10:48.073317   90642 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:10:48.073345   90642 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:10:48.073371   90642 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (289.256523ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:10:58.288685   90820 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:10:58.288958   90820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:58.288968   90820 out.go:304] Setting ErrFile to fd 2...
	I0522 18:10:58.288972   90820 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:10:58.289135   90820 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:10:58.289287   90820 out.go:298] Setting JSON to false
	I0522 18:10:58.289314   90820 mustload.go:65] Loading cluster: ha-828033
	I0522 18:10:58.289360   90820 notify.go:220] Checking for updates...
	I0522 18:10:58.289643   90820 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:10:58.289657   90820 status.go:255] checking status of ha-828033 ...
	I0522 18:10:58.290009   90820 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:10:58.308012   90820 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:10:58.308045   90820 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:58.308282   90820 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:10:58.323709   90820 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:10:58.323936   90820 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:10:58.323981   90820 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:10:58.339370   90820 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:10:58.420060   90820 ssh_runner.go:195] Run: systemctl --version
	I0522 18:10:58.423682   90820 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:10:58.433532   90820 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:10:58.477731   90820 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:10:58.46907712 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:10:58.478249   90820 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:10:58.478276   90820 api_server.go:166] Checking apiserver status ...
	I0522 18:10:58.478303   90820 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:10:58.488744   90820 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:10:58.496912   90820 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:10:58.496957   90820 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:10:58.503926   90820 api_server.go:204] freezer state: "THAWED"
	I0522 18:10:58.503948   90820 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:10:58.507341   90820 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:10:58.507360   90820 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:10:58.507371   90820 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:10:58.507389   90820 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:10:58.507629   90820 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:10:58.522707   90820 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:10:58.522723   90820 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:10:58.522959   90820 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:10:58.537357   90820 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:10:58.537389   90820 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:10:58.537402   90820 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

** /stderr **
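
Why ha-828033-m02 reads as "Host:Error" above: minikube's status path renders the container's NetworkSettings.Networks entry through a Go template into "<IPv4>,<IPv6>" and requires exactly two comma-separated fields. A container that is still running but has lost its network attachment renders an empty string, which splits into a single empty field, which is the "should have 2 values, got 1 values: []" error repeated throughout this run. The following is a minimal, self-contained sketch of that check (hypothetical helper names, not minikube's actual source):

// ipcheck.go: reproduces the driver-IP lookup seen in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIPs mirrors the inspect call from the log: the template prints
// "<IPv4>,<IPv6>" for the named network, or nothing at all when the container
// has no entry in NetworkSettings.Networks.
func containerIPs(container, network string) (string, string, error) {
	tmpl := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", "", err
	}
	ips := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(ips) != 2 {
		// An empty render splits into one empty field; fmt prints that
		// slice as "[]", matching the error text in the log.
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(ips), ips)
	}
	return ips[0], ips[1], nil
}

func main() {
	v4, v6, err := containerIPs("ha-828033-m02", "ha-828033-m02")
	if err != nil {
		fmt.Println("status error:", err)
		return
	}
	fmt.Println("IPv4:", v4, "IPv6:", v6)
}
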
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (294.719208ms)

-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0522 18:11:11.495203   91000 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:11:11.495492   91000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:11.495502   91000 out.go:304] Setting ErrFile to fd 2...
	I0522 18:11:11.495506   91000 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:11.495656   91000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:11:11.495803   91000 out.go:298] Setting JSON to false
	I0522 18:11:11.495832   91000 mustload.go:65] Loading cluster: ha-828033
	I0522 18:11:11.495871   91000 notify.go:220] Checking for updates...
	I0522 18:11:11.496216   91000 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:11.496234   91000 status.go:255] checking status of ha-828033 ...
	I0522 18:11:11.496680   91000 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:11.514140   91000 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:11:11.514166   91000 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:11.514504   91000 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:11.529646   91000 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:11.529897   91000 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:11:11.529950   91000 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:11.545207   91000 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:11.627910   91000 ssh_runner.go:195] Run: systemctl --version
	I0522 18:11:11.631538   91000 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:11:11.641940   91000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:11.688397   91000 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:11:11.679694526 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:11.688997   91000 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:11:11.689029   91000 api_server.go:166] Checking apiserver status ...
	I0522 18:11:11.689072   91000 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:11:11.699626   91000 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:11:11.707658   91000 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:11:11.707723   91000 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:11:11.714898   91000 api_server.go:204] freezer state: "THAWED"
	I0522 18:11:11.714927   91000 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:11:11.718402   91000 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:11:11.718420   91000 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:11:11.718429   91000 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:11:11.718447   91000 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:11:11.718682   91000 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:11:11.734614   91000 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:11:11.734637   91000 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:11:11.734863   91000 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:11:11.750323   91000 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:11:11.750346   91000 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:11:11.750358   91000 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

** /stderr **
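
For contrast, the primary node's health probe keeps succeeding in these logs, and it follows a fixed sequence: find the kube-apiserver process, confirm its freezer cgroup is THAWED (i.e., the pod is not paused), then GET /healthz on the HA endpoint. A rough, self-contained approximation of that flow (assumed reconstruction, not minikube's implementation):

// healthprobe.go: the three probe steps visible in the stderr log above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: newest kube-apiserver PID, as in the "pgrep -xnf" call above.
	pid, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("apiserver process not found:", err)
		return
	}

	// Step 2: the freezer line of /proc/<pid>/cgroup names the cgroup whose
	// freezer.state should read "THAWED" for an unpaused apiserver (cgroup v1).
	cg, err := exec.Command("sudo", "egrep", "^[0-9]+:freezer:",
		fmt.Sprintf("/proc/%s/cgroup", strings.TrimSpace(string(pid)))).Output()
	if err == nil {
		fmt.Println("freezer cgroup:", strings.TrimSpace(string(cg)))
	}

	// Step 3: GET /healthz; a 200 with body "ok" marks the apiserver healthy.
	// InsecureSkipVerify only because the test cluster uses its own CA.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.Status)
}
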
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (297.166014ms)

-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

-- /stdout --
** stderr ** 
	I0522 18:11:25.506768   91202 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:11:25.507042   91202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:25.507051   91202 out.go:304] Setting ErrFile to fd 2...
	I0522 18:11:25.507055   91202 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:25.507206   91202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:11:25.507384   91202 out.go:298] Setting JSON to false
	I0522 18:11:25.507410   91202 mustload.go:65] Loading cluster: ha-828033
	I0522 18:11:25.507547   91202 notify.go:220] Checking for updates...
	I0522 18:11:25.507850   91202 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:25.507886   91202 status.go:255] checking status of ha-828033 ...
	I0522 18:11:25.508403   91202 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:25.525159   91202 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:11:25.525184   91202 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:25.525411   91202 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:25.542903   91202 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:25.543210   91202 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:11:25.543255   91202 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:25.558578   91202 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:25.640263   91202 ssh_runner.go:195] Run: systemctl --version
	I0522 18:11:25.643967   91202 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:11:25.654041   91202 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:25.700045   91202 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:11:25.691450729 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:25.700607   91202 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:11:25.700637   91202 api_server.go:166] Checking apiserver status ...
	I0522 18:11:25.700669   91202 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:11:25.711189   91202 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:11:25.719316   91202 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:11:25.719386   91202 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:11:25.727792   91202 api_server.go:204] freezer state: "THAWED"
	I0522 18:11:25.727820   91202 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:11:25.731812   91202 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:11:25.731834   91202 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:11:25.731844   91202 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:11:25.731858   91202 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:11:25.732083   91202 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:11:25.748125   91202 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:11:25.748144   91202 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:11:25.748356   91202 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:11:25.764117   91202 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:11:25.764157   91202 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:11:25.764175   91202 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr" : exit status 7
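
A note on the repeated exit status 7: minikube's status help text describes the exit code as bit-encoded from right to left (1 when the host is not OK, 2 when the cluster/kubelet is not OK, 4 when Kubernetes/the apiserver is not OK). If that reading is right, 7 is exactly the Host:Error / Kubelet:Nonexistent / APIServer:Nonexistent combination reported for ha-828033-m02. A tiny illustration (constant names are ours, not minikube's):

// exitcode.go: the assumed bit composition behind "exit status 7".
package main

import "fmt"

const (
	minikubeNOK   = 1 << 0 // host/VM not OK
	clusterNOK    = 1 << 1 // kubelet not OK
	kubernetesNOK = 1 << 2 // apiserver not OK
)

func main() {
	fmt.Println(minikubeNOK | clusterNOK | kubernetesNOK) // prints 7
}
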
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
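
The inspect dump above is the ground truth that the status templates walk: for the healthy primary, NetworkSettings.Networks["ha-828033"].IPAddress is 192.168.49.2, so its "<IPv4>,<IPv6>" render has two fields, while a node whose Networks map is empty renders nothing and trips the 2-values check seen earlier. Decoding the same JSON directly makes that explicit; a minimal sketch whose struct covers only the fields shown in the dump:

// netaddr.go: pull the per-network addresses out of `docker inspect` JSON.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// inspectEntry mirrors just the fields used here from the dump above.
type inspectEntry struct {
	Name            string `json:"Name"`
	NetworkSettings struct {
		Networks map[string]struct {
			IPAddress         string `json:"IPAddress"`
			GlobalIPv6Address string `json:"GlobalIPv6Address"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	out, err := exec.Command("docker", "inspect", "ha-828033").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	var entries []inspectEntry
	if err := json.Unmarshal(out, &entries); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, e := range entries {
		for name, n := range e.NetworkSettings.Networks {
			// Healthy node: IPv4 "192.168.49.2"; an empty Networks map
			// never reaches this loop, matching the failing m02 case.
			fmt.Printf("%s on %q: IPv4=%q IPv6=%q\n", e.Name, name, n.IPAddress, n.GlobalIPv6Address)
		}
	}
}
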
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o                                                      | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh                                                    |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1                                                        |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
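Note on the step above: minikube first probes for a free private /24 (network.go settled on 192.168.49.0/24), then creates a labeled bridge network that later steps use to pin the node's static IP. A minimal sketch of the equivalent manual invocation, using exactly the flags shown in the log; the verification template is the same one cli_runner uses elsewhere in this run:

# Recreate the cluster network by hand; the minikube.sigs.k8s.io labels are
# what later cleanup steps use to find and remove the network.
docker network create --driver=bridge \
  --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
  -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
  --label=created_by.minikube.sigs.k8s.io=true \
  --label=name.minikube.sigs.k8s.io=ha-828033 \
  ha-828033

# Confirm the subnet that was actually assigned:
docker network inspect ha-828033 \
  --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'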
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
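The 4.1s extraction above is how KIC preload seeding works: the preloaded image tarball is never loaded through the host's container runtime; instead a throwaway container mounts the lz4 archive read-only alongside the named volume and untars straight into it. A hedged sketch with the log's paths factored into placeholder variables (the image digest is omitted here for brevity):

# Placeholders for the two artifacts named in the log lines above.
PRELOAD=/home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
KIC=gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887

# Seed the ha-828033 volume with the preloaded images.
docker run --rm --entrypoint /usr/bin/tar \
  -v "$PRELOAD":/preloaded.tar:ro \
  -v ha-828033:/extractDir \
  "$KIC" -I lz4 -xf /preloaded.tar -C /extractDir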
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
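Everything from here on talks to the node over SSH on a host-mapped port: the container publishes 22 (plus 8443, 2376, 5000 and 32443) on ephemeral 127.0.0.1 ports, so the mapping has to be read back from Docker each time; this run resolved 22/tcp to 32787. The inspect template below is the same one the log shows cli_runner executing:

# Resolve the host port that maps to the node container's SSH port.
docker container inspect ha-828033 \
  --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'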
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
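The unit diff a few lines up shows why the rendered file begins with an empty ExecStart=: systemd accumulates ExecStart values, and for a Type=notify service more than one is an error, so the empty directive clears the inherited command before the dockerd invocation with minikube's TLS and ulimit flags is set. The update itself is made idempotent by only swapping the file in when it differs, exactly as in the SSH command the log ran:

# Install the rendered unit only if it changed, then reload and restart.
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
  sudo systemctl -f daemon-reload
  sudo systemctl -f enable docker
  sudo systemctl -f restart docker
}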
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
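The sed series above rewrites containerd's existing config.toml in place rather than templating a fresh file; the toggle that actually matches the detected "cgroupfs" driver is SystemdCgroup=false. A minimal sketch of that one change plus the restart the log performs (the other sed edits handle the sandbox image, CNI paths and runc runtime class):

# Force runc to the cgroupfs driver detected on the host, then restart.
sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
sudo systemctl daemon-reload
sudo systemctl restart containerd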
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
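The 130-byte /etc/docker/daemon.json copied in just before this restart is not echoed anywhere in the log; a plausible minimal form, given only the log's statement that docker is being configured to use "cgroupfs", would be the following (the exact file content is an assumption, not shown in the output):

# Assumed daemon.json content; only the cgroup-driver choice is certain here.
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }
EOF
sudo systemctl daemon-reload && sudo systemctl restart docker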
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
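Once cri-docker.service has been restarted here, the /var/run/cri-dockerd.sock socket that the next step waits on comes up, and crictl (pointed at it earlier via /etc/crictl.yaml) can reach the runtime. A sketch of that configuration plus the quick check the log performs a few lines below:

# Point crictl at cri-dockerd and confirm the endpoint answers.
printf 'runtime-endpoint: unix:///var/run/cri-dockerd.sock\n' | sudo tee /etc/crictl.yaml
sudo crictl version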
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
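	
The generated file above stacks four kubeadm API documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) in one YAML stream; the log later stages it on the node as /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch of how such a multi-document config is consumed, using the binaries path the log confirms; the rename from .new and the exact --ignore-preflight-errors list minikube passes are not shown in this excerpt, so both are assumptions:

# Bootstrap the control plane from the staged multi-document config.
sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
  --config /var/tmp/minikube/kubeadm.yaml \
  --ignore-preflight-errors=all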
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
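kube-vip runs as a static pod on each control-plane node and uses a Lease (plndr-cp-lock) to elect which node answers ARP for the HA VIP 192.168.49.254 that this --ha cluster advertises as APIServerHAVIP. Note the fallback recorded above: control-plane load-balancing was skipped because the ip_vs kernel modules were absent, a probe that can be reproduced directly:

# Same probe the log ran; without ip_vs, kube-vip does ARP failover only.
if sudo sh -c "lsmod | grep -q ip_vs"; then
  echo "ip_vs present: IPVS control-plane load-balancing is possible"
else
  echo "ip_vs absent: VIP failover only (this run's outcome)"
fi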
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
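The three test/ls/openssl/ln sequences above install each CA into the system trust store the way OpenSSL expects: compute the certificate's subject hash, then point an /etc/ssl/certs/<hash>.0 symlink at it. A minimal Go sketch of that pattern (not minikube's actual helper, which runs the same commands over ssh_runner as shown in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the hash-and-symlink sequence from the log:
// `openssl x509 -hash -noout` prints the subject hash (e.g. b5213941
// for minikubeCA above), and <hash>.0 is the first lookup slot OpenSSL
// checks in /etc/ssl/certs.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // behave like `ln -fs`: replace a stale link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}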
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
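The four grep/rm pairs above are the stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references https://control-plane.minikube.internal:8443; on a first start, as here, none exist and the removals are no-ops. A sketch of that check, assuming local file access rather than the log's ssh_runner:

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanupStaleConfig removes a kubeconfig that does not reference the
// expected control-plane endpoint, so `kubeadm init` regenerates it.
func cleanupStaleConfig(path string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // first start: nothing to clean, as in the log above
	}
	if err != nil {
		return err
	}
	if !bytes.Contains(data, []byte(endpoint)) {
		return os.Remove(path)
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanupStaleConfig(f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}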
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
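The [api-check] phase is a poll against the new API server's health endpoint until it answers 200 OK. A rough sketch of such a probe; the exact endpoint kubeadm hits may differ, and the address below is only inferred from this cluster's node IP and APIServerPort:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// the probe only checks liveness, not identity, so certificate
		// verification is skipped here
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("API server is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for a healthy API server")
}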
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
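The --discovery-token-ca-cert-hash in the join commands above is the SHA-256 of the cluster CA's DER-encoded SubjectPublicKeyInfo; joining nodes use it to pin the control plane's identity during token-based discovery. A small sketch that recomputes it from the CA certificate (path taken from the log):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func caCertHash(pemPath string) (string, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo, not the whole cert
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(h) // compare against the value in the kubeadm join command
}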
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
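The burst of `kubectl get sa default` calls above (elevateKubeSystemPrivileges) is a ~500ms poll: the minikube-rbac binding created at 17:53:12 targets the kube-system "default" ServiceAccount, which only appears once the controller-manager's service-account controller has run. A minimal sketch of that wait, shelling out to kubectl as the log does (binary and kubeconfig paths from the log):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if cmd.Run() == nil {
			return nil // the service account exists; privileged addons can proceed
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
	return fmt.Errorf("default service account not ready after %v", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.1/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	fmt.Println(err)
}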
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
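The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon marking "standard" as the cluster default. A client-go sketch of the same round trip, assuming the conventional is-default-class annotation is what gets set (the kubeconfig path is illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GET the StorageClass, as in the log's first round trip
	sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"

	// PUT it back, as in the log's second round trip
	if _, err := cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard marked as the default StorageClass")
}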
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
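kic.go:225 above creates a fresh SSH keypair for the new node on the host and installs the public half as /home/docker/.ssh/authorized_keys inside the container (the 381 bytes copied above). A sketch of the key-generation half using golang.org/x/crypto/ssh; the output path is illustrative:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func generateSSHKey(keyPath string) error {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return err
	}
	// PEM-encode the private key (the id_rsa file)
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile(keyPath, privPEM, 0600); err != nil {
		return err
	}
	// Emit the authorized_keys form of the public key (the id_rsa.pub file)
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		return err
	}
	return os.WriteFile(keyPath+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
}

func main() {
	if err := generateSSHKey("/tmp/ha-828033-m02-id_rsa"); err != nil {
		panic(err)
	}
}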
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
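
	Note the retry.go delays climbing from 99µs through milliseconds to 11s and beyond: a capped exponential backoff with jitter, which is why successive waits roughly double without being exact multiples. The shape, as an illustrative sketch (the parameters are assumptions, not minikube's retry package):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// next doubles the delay per attempt, caps it, and adds up to 50%
	// jitter, so printed waits resemble the 99µs...19.4s ladder above.
	func next(attempt int, base, max time.Duration) time.Duration {
		d := base << uint(attempt)
		if d > max || d <= 0 { // cap growth, guard shift overflow
			d = max
		}
		return d + time.Duration(rand.Int63n(int64(d)/2+1))
	}

	func main() {
		for i := 0; i < 18; i++ {
			fmt.Println(next(i, 100*time.Microsecond, 20*time.Second))
		}
	}
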
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
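
	The failure handling above amounts to: give up on configureAuth after roughly ninety seconds of retries, power the node off over SSH, delete the container (the network itself survives because the primary node still holds it), then re-run the entire create once after a fixed pause. Roughly, as a sketch with illustrative names:

	package main

	import (
		"fmt"
		"time"
	)

	// startHost sketches the start.go flow above: one full cleanup and
	// re-create after a failed provision, with a fixed delay between.
	func startHost(create func() error, cleanup func()) error {
		if err := create(); err != nil {
			fmt.Printf("! StartHost failed, but will try again: %v\n", err)
			cleanup()                   // stop node, delete container and volume
			time.Sleep(5 * time.Second) // "Will try again in 5 seconds ..."
			return create()
		}
		return nil
	}

	func main() {
		err := startHost(
			func() error { return fmt.Errorf("provisioning: temporary error") },
			func() { fmt.Println("* Deleting node in docker ...") },
		)
		fmt.Println("final:", err)
	}
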
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
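
	The static IP is derived arithmetically rather than requested from Docker: on the cluster subnet the gateway takes .1, the primary control plane .2, and each additional node the next octet, hence .3 for m02. A sketch (the helper name is assumed, not minikube's exact code):

	package main

	import (
		"fmt"
		"net"
	)

	// nodeIP returns the static address for the nth node on the cluster
	// network: gateway 192.168.49.1, node 1 -> .2, node 2 (m02) -> .3.
	func nodeIP(gateway net.IP, n int) net.IP {
		g := gateway.To4()
		return net.IPv4(g[0], g[1], g[2], g[3]+byte(n))
	}

	func main() {
		fmt.Println(nodeIP(net.ParseIP("192.168.49.1"), 2)) // 192.168.49.3
	}
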
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
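
	The preload step above is worth spelling out: the lz4-compressed tarball of Kubernetes images is untarred straight into the node's named volume by a throwaway container running tar as its entrypoint, so the node later boots with its /var already populated. The logged command, reproduced as a Go exec sketch (paths shortened, argument values are placeholders):

	package main

	import (
		"os/exec"
	)

	// extractPreload mirrors the logged command: run the base image with
	// tar as the entrypoint, mounting the preload tarball read-only and
	// the node volume as the extraction target.
	func extractPreload(tarball, volume, baseImage string) error {
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", tarball+":/preloaded.tar:ro",
			"-v", volume+":/extractDir",
			baseImage,
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		return cmd.Run()
	}

	func main() {
		_ = extractPreload("preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4",
			"ha-828033-m02",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887")
	}
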
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
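
	The handshake reset above is benign: sshd inside the just-started container is not yet accepting sessions, and the attempt a few seconds later (next line) succeeds. A dial-until-deadline sketch of such a wait, at the TCP level for brevity (minikube's sshutil retries the SSH handshake itself):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitTCP dials until the port accepts a connection or the deadline
	// passes; resets while the service boots are retried, not fatal.
	func waitTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			c, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				c.Close()
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("giving up on %s: %w", addr, err)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitTCP("127.0.0.1:32797", 30*time.Second))
	}
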
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
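
The retries above all fail the same way: the "docker container inspect" format string prints "<ip>,<ipv6>" only when the container is attached to a network literally named "ha-828033-m02". If that key is absent from .NetworkSettings.Networks, the {{with ...}} block evaluates to the empty string, and splitting "" on "," yields a single empty element, hence "got 1 values: []". A minimal self-contained Go sketch (mock data, not minikube's code) reproducing that failure mode:

// why_one_value.go: reproduce the "got 1 values: []" error seen above.
package main

import (
	"fmt"
	"strings"
	"text/template"
)

func main() {
	const format = `{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
	tmpl := template.Must(template.New("ip").Parse(format))

	// Hypothetical container attached only to "bridge", not "ha-828033-m02".
	container := map[string]any{
		"NetworkSettings": map[string]any{
			"Networks": map[string]any{
				"bridge": map[string]any{"IPAddress": "172.17.0.2", "GlobalIPv6Address": ""},
			},
		},
	}

	var out strings.Builder
	if err := tmpl.Execute(&out, container); err != nil {
		panic(err)
	}
	// Missing key -> {{with}} is false -> empty output -> one empty field.
	parts := strings.Split(out.String(), ",")
	fmt.Printf("container addresses should have 2 values, got %d values: %v\n", len(parts), parts)
}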
	
	
	==> Docker <==
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:35 ha-828033 dockerd[1209]: 2024/05/22 18:08:35 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:43 ha-828033 dockerd[1209]: 2024/05/22 18:08:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
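
The repeated dockerd warning above is Go's net/http noticing that WriteHeader was called more than once on the same response (here via the otelhttp instrumentation wrapper). A minimal sketch, unrelated to dockerd's actual handlers, that triggers the identical log line:

// superfluous.go: call WriteHeader twice; net/http logs
// "http: superfluous response.WriteHeader call from ..." to its error log.
package main

import (
	"log"
	"net/http"
	"net/http/httptest"
)

func main() {
	h := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.WriteHeader(http.StatusInternalServerError) // second call is superfluous
	})
	srv := httptest.NewServer(h)
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close() // the 200 from the first WriteHeader wins
}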
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   14 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              17 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         18 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         18 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         18 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         18 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     18 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         18 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         18 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         18 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         18 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
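
The query pattern in these CoreDNS logs is the standard pod search-path expansion: with the default ndots:5 resolv.conf, a short name like "kubernetes.default" is first tried as "kubernetes.default.default.svc.cluster.local" (NXDOMAIN above) and then as "kubernetes.default.svc.cluster.local" (NOERROR). A sketch of the lookup that produces those lines, assuming it runs inside a pod with the cluster's default resolver configuration:

// lookup.go: resolve a short service name through the pod's DNS search path.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	addrs, err := net.DefaultResolver.LookupHost(ctx, "kubernetes.default")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved to:", addrs) // e.g. [10.96.0.1], the apiserver Service IP
}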
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:11:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
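
The percentages in the Allocated resources table above are just summed pod requests divided by the node's allocatable capacity (8 CPUs, 32859356Ki memory). A quick arithmetic check in Go:

// allocated.go: re-derive "cpu 950m (11%)" and "memory 290Mi (0%)" from the pod table.
package main

import "fmt"

func main() {
	// CPU requests (millicores): 2x coredns, etcd, kindnet, apiserver, controller-manager, scheduler.
	cpuRequests := 100 + 100 + 100 + 100 + 250 + 200 + 100 // = 950m
	allocatableMilliCPU := 8 * 1000
	fmt.Printf("cpu: %dm (%d%%)\n", cpuRequests, cpuRequests*100/allocatableMilliCPU) // cpu: 950m (11%)

	// Memory requests (Mi): 2x coredns (70), etcd (100), kindnet (50).
	memRequestsMi := 70 + 70 + 100 + 50 // = 290Mi
	allocatableMi := 32859356 / 1024    // Ki -> Mi
	fmt.Printf("memory: %dMi (%d%%)\n", memRequestsMi, memRequestsMi*100/allocatableMi) // memory: 290Mi (0%)
}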
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	
	
	==> kernel <==
	 18:11:26 up 53 min,  0 users,  load average: 0.19, 0.44, 0.47
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
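
kindnet reconciles the node list on a fixed 10-second cadence, which is why the same "Handling node ... handling current node" pair repeats above. A rough sketch of that loop's shape (an assumption about the pattern, not kindnet's actual source):

// reconcile.go: assumed shape of a fixed-interval node reconcile loop.
package main

import (
	"log"
	"time"
)

func main() {
	nodes := map[string]struct{}{"192.168.49.2": {}} // single node, as in the log
	for range time.Tick(10 * time.Second) {
		for ip := range nodes {
			log.Printf("Handling node with IPs: map[%s:{}]", ip)
			// For the current node there is nothing to route; remote nodes
			// would get pod-CIDR routes installed here.
		}
	}
}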
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	E0522 18:08:30.845810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38884: use of closed network connection
	E0522 18:08:30.987779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38902: use of closed network connection
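
The "Error on socket receive" entries appear when a client talking to the kube-vip address 192.168.49.254 drops its connection mid-request; "use of closed network connection" is Go's standard error for a read attempted on a net.Conn that has already been closed on the local side. A small sketch showing where that error string comes from:

// closedconn.go: reading after Close yields "use of closed network connection".
package main

import (
	"fmt"
	"net"
)

func main() {
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()

	go func() {
		if c, err := net.Dial("tcp", ln.Addr().String()); err == nil {
			c.Close()
		}
	}()

	conn, err := ln.Accept()
	if err != nil {
		panic(err)
	}
	conn.Close()
	if _, err := conn.Read(make([]byte, 1)); err != nil {
		fmt.Println(err) // read tcp ...: use of closed network connection
	}
}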
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
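A note on the kube-scheduler "forbidden" warnings in the log above: they are the usual startup race in which the scheduler's informers begin listing resources before the system:kube-scheduler RBAC bindings have propagated, and they stop once "Caches are synced" is logged. A hedged sanity check, assuming the operator is allowed to impersonate (context name taken from this report):

	# sketch only: confirm the scheduler identity can now list pods cluster-wide
	kubectl --context ha-828033 auth can-i list pods --all-namespaces --as=system:kube-scheduler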
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  4m44s (x3 over 14m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  4m44s (x3 over 14m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (162.54s)
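The FailedScheduling events in the describe output above are self-consistent: the busybox pods carry a pod anti-affinity rule, so with only one schedulable node the two extra replicas can never be placed. A hedged way to inspect that rule (deployment name inferred from the ReplicaSet busybox-fc5497c4f):

	# sketch only: show the anti-affinity that keeps the extra replicas Pending
	kubectl --context ha-828033 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'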

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:304: expected profile "ha-828033" in json of 'profile list' to include 4 nodes but have 2 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfs
shares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02
\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"Soc
ketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-828033" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSShares
Root\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\
"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath
\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68363,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T17:52:56.86490163Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "214439a25e1a683e872846d9aae152cc27b978772ac9336727dd04a5fb05455d",
	            "SandboxKey": "/var/run/docker/netns/214439a25e1a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "0c279da997a250d17ea7ba275327b11579f75ecdb00cf9acaca6353d94077512",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
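In the inspect output above, HostConfig.PortBindings requests 127.0.0.1 with an empty HostPort, so Docker assigns ephemeral host ports, which then show up under NetworkSettings.Ports (e.g. 8443/tcp bound to 32784). A hedged way to resolve a single mapping without parsing the full JSON:

	# sketch only: print the host port Docker picked for the apiserver port
	docker port ha-828033 8443/tcp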
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9 -- nslookup                                              |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- get pods -o                                                      | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}'                                             |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh                                                    |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1                                                        |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
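	Worth noting in the audit table above: the final `node start m02` row has no End Time, which matches the RestartSecondaryNode failure. A hedged follow-up to see per-node state after that command (the --format template fields are the same ones this report uses elsewhere; the .Name field is an assumption):
	
	# sketch only: per-node host/apiserver status after the failed restart
	out/minikube-linux-amd64 status -p ha-828033 --format '{{.Name}}: host={{.Host}} apiserver={{.APIServer}}'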
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:52:51
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:52:51.616388   67740 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:51.616660   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616670   67740 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:51.616674   67740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:51.616882   67740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:51.617455   67740 out.go:298] Setting JSON to false
	I0522 17:52:51.618613   67740 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2116,"bootTime":1716398256,"procs":498,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:51.618668   67740 start.go:139] virtualization: kvm guest
	I0522 17:52:51.620581   67740 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:51.621796   67740 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:51.622990   67740 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:51.621903   67740 notify.go:220] Checking for updates...
	I0522 17:52:51.625177   67740 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:51.626330   67740 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:51.627520   67740 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:51.628659   67740 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:51.629817   67740 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:51.650607   67740 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:51.650716   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.695998   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.687785691 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.696115   67740 docker.go:295] overlay module found
	I0522 17:52:51.697872   67740 out.go:177] * Using the docker driver based on user configuration
	I0522 17:52:51.699059   67740 start.go:297] selected driver: docker
	I0522 17:52:51.699080   67740 start.go:901] validating driver "docker" against <nil>
	I0522 17:52:51.699093   67740 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:51.699900   67740 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:51.745624   67740 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:44 SystemTime:2024-05-22 17:52:51.73730429 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:51.745821   67740 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:52:51.746041   67740 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 17:52:51.747482   67740 out.go:177] * Using Docker driver with root privileges
	I0522 17:52:51.748998   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:52:51.749011   67740 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 17:52:51.749020   67740 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 17:52:51.749077   67740 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:51.750256   67740 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 17:52:51.751326   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:52:51.752481   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:52:51.753555   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:51.753579   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:52:51.753585   67740 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:52:51.753627   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:52:51.753764   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:52:51.753779   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:52:51.754104   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:51.754126   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json: {Name:mk536dd8f64273be31005b58553b5cd1d6e6f69c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:52:51.769095   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:52:51.769113   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:52:51.769128   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:52:51.769147   67740 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:52:51.769223   67740 start.go:364] duration metric: took 61.25µs to acquireMachinesLock for "ha-828033"
	I0522 17:52:51.769243   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:52:51.769302   67740 start.go:125] createHost starting for "" (driver="docker")
	I0522 17:52:51.771035   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:52:51.771256   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:52:51.771318   67740 client.go:168] LocalClient.Create starting
	I0522 17:52:51.771394   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:52:51.771429   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771446   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771502   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:52:51.771520   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:52:51.771528   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:52:51.771801   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 17:52:51.786884   67740 cli_runner.go:211] docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 17:52:51.786972   67740 network_create.go:281] running [docker network inspect ha-828033] to gather additional debugging logs...
	I0522 17:52:51.787013   67740 cli_runner.go:164] Run: docker network inspect ha-828033
	W0522 17:52:51.801352   67740 cli_runner.go:211] docker network inspect ha-828033 returned with exit code 1
	I0522 17:52:51.801375   67740 network_create.go:284] error running [docker network inspect ha-828033]: docker network inspect ha-828033: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-828033 not found
	I0522 17:52:51.801394   67740 network_create.go:286] output of [docker network inspect ha-828033]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-828033 not found
	
	** /stderr **
	I0522 17:52:51.801476   67740 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:52:51.817609   67740 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001996110}
	I0522 17:52:51.817644   67740 network_create.go:124] attempt to create docker network ha-828033 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0522 17:52:51.817690   67740 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-828033 ha-828033
	I0522 17:52:51.866851   67740 network_create.go:108] docker network ha-828033 192.168.49.0/24 created
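	The two steps above pick a free private subnet (192.168.49.0/24) and create a dedicated bridge network for the cluster with a fixed gateway and MTU. As a quick cross-check from the host, a sketch using the stock docker CLI (only the network name ha-828033 from the log is assumed):
	    # print the subnet and gateway docker actually assigned to the network
	    docker network inspect ha-828033 \
	      --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	    # expected per the log above: 192.168.49.0/24 via 192.168.49.1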
	I0522 17:52:51.866880   67740 kic.go:121] calculated static IP "192.168.49.2" for the "ha-828033" container
	I0522 17:52:51.866949   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:52:51.883567   67740 cli_runner.go:164] Run: docker volume create ha-828033 --label name.minikube.sigs.k8s.io=ha-828033 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:52:51.902679   67740 oci.go:103] Successfully created a docker volume ha-828033
	I0522 17:52:51.902766   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --entrypoint /usr/bin/test -v ha-828033:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:52:52.415715   67740 oci.go:107] Successfully prepared a docker volume ha-828033
	I0522 17:52:52.415766   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:52:52.415787   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:52:52.415843   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:52:56.549014   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.133117574s)
	I0522 17:52:56.549059   67740 kic.go:203] duration metric: took 4.133268991s to extract preloaded images to volume ...
	W0522 17:52:56.549215   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:52:56.549336   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:52:56.595962   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033 --name ha-828033 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033 --network ha-828033 --ip 192.168.49.2 --volume ha-828033:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
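	The docker run above publishes the guest's SSH, Docker, and API server ports (22, 2376, 8443, ...) on ephemeral host ports bound to 127.0.0.1. A sketch for recovering the host port that maps to the guest's sshd, which is the 127.0.0.1:32787 the provisioner dials below:
	    # show the host-side binding for the container's SSH port
	    docker port ha-828033 22/tcp
	    # e.g. 127.0.0.1:32787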
	I0522 17:52:56.872425   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Running}}
	I0522 17:52:56.891462   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:56.907928   67740 cli_runner.go:164] Run: docker exec ha-828033 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:52:56.946756   67740 oci.go:144] the created container "ha-828033" has a running status.
	I0522 17:52:56.946795   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa...
	I0522 17:52:57.123336   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:52:57.123383   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:52:57.142261   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.162674   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:52:57.162700   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:52:57.249568   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:52:57.270001   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:52:57.270092   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.288870   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.289150   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.289175   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:52:57.494306   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.494336   67740 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 17:52:57.494406   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.511445   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.511684   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.511709   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 17:52:57.632360   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 17:52:57.632434   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:57.648419   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:57.648608   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:57.648626   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:52:57.762947   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:52:57.762976   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:52:57.762997   67740 ubuntu.go:177] setting up certificates
	I0522 17:52:57.763011   67740 provision.go:84] configureAuth start
	I0522 17:52:57.763069   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:57.779057   67740 provision.go:143] copyHostCerts
	I0522 17:52:57.779092   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779116   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 17:52:57.779121   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 17:52:57.779194   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 17:52:57.779293   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779410   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 17:52:57.779430   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 17:52:57.779491   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 17:52:57.779566   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779592   67740 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 17:52:57.779602   67740 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 17:52:57.779638   67740 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 17:52:57.779711   67740 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 17:52:58.158531   67740 provision.go:177] copyRemoteCerts
	I0522 17:52:58.158593   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 17:52:58.158628   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.174030   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:58.259047   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 17:52:58.259096   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 17:52:58.279107   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 17:52:58.279164   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 17:52:58.298603   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 17:52:58.298655   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 17:52:58.318081   67740 provision.go:87] duration metric: took 555.057584ms to configureAuth
	I0522 17:52:58.318107   67740 ubuntu.go:193] setting minikube options for container-runtime
	I0522 17:52:58.318262   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:58.318307   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.334537   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.334725   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.334739   67740 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 17:52:58.443317   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 17:52:58.443343   67740 ubuntu.go:71] root file system type: overlay
	I0522 17:52:58.443474   67740 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 17:52:58.443540   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.459128   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.459328   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.459387   67740 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 17:52:58.581102   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 17:52:58.581172   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:58.597436   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:52:58.597600   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32787 <nil> <nil>}
	I0522 17:52:58.597616   67740 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 17:52:59.221776   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 17:52:58.575464359 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
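	The update is idempotent: when the rendered unit matches what is already installed, diff succeeds and the command short-circuits; otherwise the new file is moved into place and docker is reloaded, enabled, and restarted, producing the diff shown above. A minimal sketch of the same guard pattern for any generated config file (example.conf and the example service are illustrative, not from the log):
	    # replace the config and restart the service only when the rendered file differs
	    sudo diff -u /etc/example.conf /etc/example.conf.new \
	      || { sudo mv /etc/example.conf.new /etc/example.conf; \
	           sudo systemctl daemon-reload && sudo systemctl restart example; }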
	
	I0522 17:52:59.221804   67740 machine.go:97] duration metric: took 1.951777752s to provisionDockerMachine
	I0522 17:52:59.221825   67740 client.go:171] duration metric: took 7.450490051s to LocalClient.Create
	I0522 17:52:59.221846   67740 start.go:167] duration metric: took 7.450590188s to libmachine.API.Create "ha-828033"
	I0522 17:52:59.221855   67740 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 17:52:59.221867   67740 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 17:52:59.221924   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 17:52:59.221966   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.237240   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.323437   67740 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 17:52:59.326293   67740 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 17:52:59.326324   67740 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 17:52:59.326337   67740 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 17:52:59.326349   67740 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 17:52:59.326360   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 17:52:59.326404   67740 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 17:52:59.326472   67740 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 17:52:59.326481   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 17:52:59.326562   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 17:52:59.333825   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:52:59.354042   67740 start.go:296] duration metric: took 132.174455ms for postStartSetup
	I0522 17:52:59.354355   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.369659   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:52:59.369914   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:52:59.369957   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.385473   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.467652   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:52:59.471509   67740 start.go:128] duration metric: took 7.702195096s to createHost
	I0522 17:52:59.471529   67740 start.go:83] releasing machines lock for "ha-828033", held for 7.702295867s
	I0522 17:52:59.471577   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 17:52:59.487082   67740 ssh_runner.go:195] Run: cat /version.json
	I0522 17:52:59.487134   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.487143   67740 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 17:52:59.487207   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:52:59.502998   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.504153   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:52:59.582552   67740 ssh_runner.go:195] Run: systemctl --version
	I0522 17:52:59.586415   67740 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 17:52:59.653911   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 17:52:59.675707   67740 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 17:52:59.675785   67740 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 17:52:59.699419   67740 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
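	Here minikube patches the loopback CNI config in place and renames any bridge/podman CNI configs with an .mk_disabled suffix, so that kindnet (recommended earlier for this multinode profile) ends up as the only effective CNI. A sketch for seeing what was set aside inside the node container:
	    # list CNI configs in the node; disabled ones carry the .mk_disabled suffix
	    docker exec ha-828033 ls /etc/cni/net.d
	    # e.g. 87-podman-bridge.conflist.mk_disabled, 100-crio-bridge.conf.mk_disabled, ...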
	I0522 17:52:59.699447   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.699483   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.699592   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:52:59.713359   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 17:52:59.721747   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 17:52:59.729895   67740 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 17:52:59.729949   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 17:52:59.738288   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.746561   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 17:52:59.754810   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 17:52:59.762993   67740 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 17:52:59.770726   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 17:52:59.778920   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 17:52:59.787052   67740 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 17:52:59.795263   67740 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 17:52:59.802296   67740 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 17:52:59.809582   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:52:59.883276   67740 ssh_runner.go:195] Run: sudo systemctl restart containerd
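	The sed edits above force containerd to the cgroupfs driver (SystemdCgroup = false, matching the "cgroupfs" driver detected on the host), normalize the runc runtime to io.containerd.runc.v2, and pin the pause image. A sketch for spot-checking the resulting config inside the node:
	    # confirm the cgroup driver and sandbox image containerd will use
	    docker exec ha-828033 grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml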
	I0522 17:52:59.963129   67740 start.go:494] detecting cgroup driver to use...
	I0522 17:52:59.963176   67740 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 17:52:59.963243   67740 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 17:52:59.974498   67740 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 17:52:59.974562   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 17:52:59.984764   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 17:53:00.000654   67740 ssh_runner.go:195] Run: which cri-dockerd
	I0522 17:53:00.003744   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 17:53:00.011737   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 17:53:00.029748   67740 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 17:53:00.143798   67740 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 17:53:00.227819   67740 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 17:53:00.227952   67740 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 17:53:00.243383   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.315723   67740 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 17:53:00.537231   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 17:53:00.547492   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.557301   67740 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 17:53:00.636990   67740 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 17:53:00.707384   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.778889   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 17:53:00.790448   67740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 17:53:00.799716   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:00.871781   67740 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 17:53:00.927578   67740 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 17:53:00.927643   67740 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 17:53:00.930933   67740 start.go:562] Will wait 60s for crictl version
	I0522 17:53:00.930992   67740 ssh_runner.go:195] Run: which crictl
	I0522 17:53:00.934009   67740 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 17:53:00.964626   67740 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
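	crictl reports the Docker Engine (fronted by cri-dockerd) as the CRI runtime; the endpoint it queries comes from the /etc/crictl.yaml written a few steps earlier. The same check can be run explicitly against the socket, as a sketch:
	    # query the CRI runtime version through cri-dockerd directly
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version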
	I0522 17:53:00.964671   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:00.985746   67740 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 17:53:01.008319   67740 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 17:53:01.008394   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:01.024322   67740 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 17:53:01.027742   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 17:53:01.037471   67740 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 17:53:01.037581   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:01.037636   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.054459   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.054484   67740 docker.go:615] Images already preloaded, skipping extraction
	I0522 17:53:01.054533   67740 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 17:53:01.071182   67740 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 17:53:01.071199   67740 cache_images.go:84] Images are preloaded, skipping loading
	I0522 17:53:01.071214   67740 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 17:53:01.071337   67740 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 17:53:01.071392   67740 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 17:53:01.113042   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:01.113070   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:01.113090   67740 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 17:53:01.113121   67740 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 17:53:01.113296   67740 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
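	This rendered kubeadm config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new further down. As a sketch, a file like this can be exercised without side effects via kubeadm's dry-run mode (assuming the YAML is saved locally as kubeadm.yaml and a v1.30 kubeadm is on PATH):
	    # render what kubeadm init would do, without touching the node
	    sudo kubeadm init --config kubeadm.yaml --dry-run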
	
	I0522 17:53:01.113320   67740 kube-vip.go:115] generating kube-vip config ...
	I0522 17:53:01.113376   67740 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 17:53:01.123923   67740 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
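	With lsmod printing nothing (empty stdout/stderr above), minikube skips kube-vip's IPVS-based control-plane load balancing, and the manifest below advertises the VIP via ARP instead (vip_arp: "true"). On a host where the kernel modules exist, they could be loaded ahead of time, as a sketch:
	    # load the IPVS modules kube-vip's load balancer would need, then re-check
	    sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	    lsmod | grep ip_vs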
	I0522 17:53:01.124031   67740 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
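
This manifest is written to /etc/kubernetes/manifests (see the scp a few lines below), so the kubelet runs kube-vip as a static pod that advertises the VIP 192.168.49.254 over ARP on eth0. Once the apiserver is reachable, a quick check could look like this (the pod name is an assumption: static pods are suffixed with the node name):

	kubectl -n kube-system get pod kube-vip-ha-828033 -o wide
	ping -c1 192.168.49.254   # the advertised control-plane VIP
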
	I0522 17:53:01.124082   67740 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 17:53:01.131476   67740 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 17:53:01.131533   67740 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 17:53:01.138724   67740 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 17:53:01.153627   67740 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 17:53:01.168501   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 17:53:01.183138   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0522 17:53:01.197801   67740 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 17:53:01.200669   67740 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
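
That one-liner is an idempotent hosts-file update: drop any existing control-plane.minikube.internal entry, append the current mapping, and install the result via a temp file. Spelled out as a hypothetical standalone script equivalent to the logged command:

	NAME=control-plane.minikube.internal
	IP=192.168.49.254
	# Rebuild /etc/hosts without the old entry, then append the new one.
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts
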
	I0522 17:53:01.209778   67740 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 17:53:01.280341   67740 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 17:53:01.292055   67740 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 17:53:01.292076   67740 certs.go:194] generating shared ca certs ...
	I0522 17:53:01.292094   67740 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.292206   67740 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 17:53:01.292254   67740 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 17:53:01.292264   67740 certs.go:256] generating profile certs ...
	I0522 17:53:01.292307   67740 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 17:53:01.292319   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt with IP's: []
	I0522 17:53:01.356953   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt ...
	I0522 17:53:01.356984   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt: {Name:mk107af694da048ec96fb863990f78dd2f1cfdbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357149   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key ...
	I0522 17:53:01.357160   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key: {Name:mkf1e13d4f9700868add4d6cce143b650167d122 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.357241   67740 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36
	I0522 17:53:01.357257   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0522 17:53:01.556313   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 ...
	I0522 17:53:01.556340   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36: {Name:mkab4c373a3fffab576a8ea1d67e55afa225eeb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556500   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 ...
	I0522 17:53:01.556513   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36: {Name:mk7e54effde1d4509e26cfa435b194571ee47bb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.556580   67740 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 17:53:01.556650   67740 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.fe94fa36 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 17:53:01.556697   67740 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 17:53:01.556711   67740 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt with IP's: []
	I0522 17:53:01.630998   67740 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt ...
	I0522 17:53:01.631021   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt: {Name:mkeef06bb61e0ccc36361cc465c59f21e7bdea1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631157   67740 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key ...
	I0522 17:53:01.631168   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key: {Name:mkb9ab74b377711217a8c6b152f36c9fda7264a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:01.631230   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 17:53:01.631246   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 17:53:01.631260   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 17:53:01.631309   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 17:53:01.631328   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 17:53:01.631343   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 17:53:01.631356   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 17:53:01.631365   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
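
The apiserver profile cert generated at 17:53:01.357257 is signed for the service IP, localhost, the node IP, and the HA VIP. After the copy steps below place it on the node, the SANs can be confirmed with openssl (a sketch; the path comes from the NewFileAsset lines above):

	# List the Subject Alternative Names baked into the apiserver cert.
	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'
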
	I0522 17:53:01.631417   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 17:53:01.631447   67740 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 17:53:01.631457   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 17:53:01.631479   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 17:53:01.631502   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 17:53:01.631523   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 17:53:01.631558   67740 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 17:53:01.631582   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.631597   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.631608   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.632128   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 17:53:01.652751   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 17:53:01.672560   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 17:53:01.691795   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 17:53:01.711301   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 17:53:01.731063   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 17:53:01.751064   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 17:53:01.770695   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 17:53:01.790410   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 17:53:01.814053   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 17:53:01.833703   67740 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 17:53:01.853223   67740 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 17:53:01.868213   67740 ssh_runner.go:195] Run: openssl version
	I0522 17:53:01.872673   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 17:53:01.880830   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883744   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.883792   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 17:53:01.889587   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 17:53:01.897227   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 17:53:01.904819   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907709   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.907753   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 17:53:01.913481   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 17:53:01.921278   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 17:53:01.929363   67740 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932295   67740 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.932352   67740 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 17:53:01.938436   67740 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
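
The hash/symlink sequence above implements OpenSSL's CA directory convention: a trusted cert must be reachable as <subject-hash>.0 under /etc/ssl/certs, and `openssl x509 -hash` computes that name (b5213941 for minikubeCA here). The same steps for an arbitrary PEM, with ca.pem and leaf.pem as placeholders:

	h=$(openssl x509 -hash -noout -in ca.pem)      # e.g. b5213941
	sudo ln -fs "$(pwd)/ca.pem" "/etc/ssl/certs/${h}.0"
	openssl verify -CApath /etc/ssl/certs leaf.pem # now resolves the CA by hash
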
	I0522 17:53:01.946360   67740 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 17:53:01.949115   67740 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 17:53:01.949164   67740 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:53:01.949252   67740 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 17:53:01.965541   67740 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 17:53:01.973093   67740 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 17:53:01.980229   67740 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 17:53:01.980270   67740 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 17:53:01.987751   67740 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 17:53:01.987768   67740 kubeadm.go:156] found existing configuration files:
	
	I0522 17:53:01.987805   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 17:53:01.994901   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 17:53:01.994936   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 17:53:02.001636   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 17:53:02.008534   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 17:53:02.008575   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 17:53:02.015362   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.022382   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 17:53:02.022417   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 17:53:02.029147   67740 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 17:53:02.036313   67740 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 17:53:02.036352   67740 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 17:53:02.043146   67740 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 17:53:02.083648   67740 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 17:53:02.083709   67740 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 17:53:02.119636   67740 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 17:53:02.119808   67740 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 17:53:02.119876   67740 kubeadm.go:309] OS: Linux
	I0522 17:53:02.119973   67740 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 17:53:02.120054   67740 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 17:53:02.120145   67740 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 17:53:02.120222   67740 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 17:53:02.120314   67740 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 17:53:02.120387   67740 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 17:53:02.120444   67740 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 17:53:02.120498   67740 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 17:53:02.120559   67740 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 17:53:02.176871   67740 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 17:53:02.177025   67740 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 17:53:02.177141   67740 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0522 17:53:02.372325   67740 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 17:53:02.375701   67740 out.go:204]   - Generating certificates and keys ...
	I0522 17:53:02.375812   67740 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 17:53:02.375935   67740 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 17:53:02.532924   67740 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 17:53:02.638523   67740 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 17:53:02.792671   67740 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 17:53:02.965135   67740 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 17:53:03.124232   67740 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 17:53:03.124354   67740 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.226994   67740 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 17:53:03.227194   67740 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-828033 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0522 17:53:03.284062   67740 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 17:53:03.587406   67740 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 17:53:03.694896   67740 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 17:53:03.695247   67740 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 17:53:03.870895   67740 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 17:53:04.007853   67740 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 17:53:04.078725   67740 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 17:53:04.260744   67740 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 17:53:04.365893   67740 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 17:53:04.366333   67740 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 17:53:04.368648   67740 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 17:53:04.370859   67740 out.go:204]   - Booting up control plane ...
	I0522 17:53:04.370979   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 17:53:04.371088   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 17:53:04.371171   67740 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 17:53:04.383092   67740 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 17:53:04.384599   67740 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 17:53:04.384838   67740 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 17:53:04.466492   67740 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 17:53:04.466604   67740 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 17:53:05.468427   67740 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002114893s
	I0522 17:53:05.468551   67740 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 17:53:11.141380   67740 kubeadm.go:309] [api-check] The API server is healthy after 5.672901996s
	I0522 17:53:11.152116   67740 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 17:53:11.161056   67740 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 17:53:11.678578   67740 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 17:53:11.678814   67740 kubeadm.go:309] [mark-control-plane] Marking the node ha-828033 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 17:53:11.685295   67740 kubeadm.go:309] [bootstrap-token] Using token: 5urei6.f9k1l0b1jzskzaeu
	I0522 17:53:11.686669   67740 out.go:204]   - Configuring RBAC rules ...
	I0522 17:53:11.686814   67740 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 17:53:11.689832   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 17:53:11.694718   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 17:53:11.699847   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 17:53:11.702108   67740 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 17:53:11.704239   67740 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 17:53:11.712550   67740 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 17:53:11.974533   67740 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 17:53:12.547008   67740 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 17:53:12.548083   67740 kubeadm.go:309] 
	I0522 17:53:12.548149   67740 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 17:53:12.548156   67740 kubeadm.go:309] 
	I0522 17:53:12.548253   67740 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 17:53:12.548267   67740 kubeadm.go:309] 
	I0522 17:53:12.548307   67740 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 17:53:12.548384   67740 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 17:53:12.548466   67740 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 17:53:12.548477   67740 kubeadm.go:309] 
	I0522 17:53:12.548545   67740 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 17:53:12.548559   67740 kubeadm.go:309] 
	I0522 17:53:12.548601   67740 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 17:53:12.548609   67740 kubeadm.go:309] 
	I0522 17:53:12.548648   67740 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 17:53:12.548713   67740 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 17:53:12.548778   67740 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 17:53:12.548785   67740 kubeadm.go:309] 
	I0522 17:53:12.548889   67740 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 17:53:12.548992   67740 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 17:53:12.549009   67740 kubeadm.go:309] 
	I0522 17:53:12.549123   67740 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549259   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 17:53:12.549291   67740 kubeadm.go:309] 	--control-plane 
	I0522 17:53:12.549300   67740 kubeadm.go:309] 
	I0522 17:53:12.549413   67740 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 17:53:12.549427   67740 kubeadm.go:309] 
	I0522 17:53:12.549530   67740 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 5urei6.f9k1l0b1jzskzaeu \
	I0522 17:53:12.549654   67740 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
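
The --discovery-token-ca-cert-hash printed above pins the cluster CA for joining nodes. It can be recomputed from the CA cert to confirm a match before joining; the standard derivation, using the certificatesDir from the config at the top of this run:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl pkey -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
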
	I0522 17:53:12.551710   67740 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 17:53:12.551839   67740 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 17:53:12.551867   67740 cni.go:84] Creating CNI manager for ""
	I0522 17:53:12.551876   67740 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 17:53:12.553609   67740 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 17:53:12.554924   67740 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 17:53:12.558498   67740 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 17:53:12.558516   67740 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 17:53:12.574461   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 17:53:12.755502   67740 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 17:53:12.755579   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.755600   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-828033 minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=ha-828033 minikube.k8s.io/primary=true
	I0522 17:53:12.850109   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:12.855591   67740 ops.go:34] apiserver oom_adj: -16
	I0522 17:53:13.350585   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:13.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.350332   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:14.850482   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.350200   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:15.850568   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.350359   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:16.850559   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.350665   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:17.850775   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.351191   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:18.850358   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.351122   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:19.850171   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.350366   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:20.851051   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.350960   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:21.851014   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.350781   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:22.850795   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.350314   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:23.851155   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.351209   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.850179   67740 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 17:53:24.912848   67740 kubeadm.go:1107] duration metric: took 12.157331343s to wait for elevateKubeSystemPrivileges
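
The burst of `kubectl get sa default` calls above is a poll: the minikube-rbac clusterrolebinding created at 17:53:12 only becomes usable once the controller-manager has created the default ServiceAccount, which is what the 12.16s reported here was spent waiting for. A hypothetical equivalent loop:

	# Wait until the default ServiceAccount exists in the default namespace.
	until kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      -n default get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done
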
	W0522 17:53:24.912892   67740 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 17:53:24.912903   67740 kubeadm.go:393] duration metric: took 22.9637422s to StartCluster
	I0522 17:53:24.912925   67740 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.912998   67740 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.913898   67740 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:53:24.914152   67740 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:24.914177   67740 start.go:240] waiting for startup goroutines ...
	I0522 17:53:24.914209   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 17:53:24.914186   67740 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 17:53:24.914247   67740 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 17:53:24.914265   67740 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 17:53:24.914280   67740 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 17:53:24.914303   67740 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 17:53:24.914307   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.914407   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:24.914687   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.914856   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.936661   67740 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 17:53:24.935358   67740 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:53:24.938027   67740 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:24.938051   67740 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 17:53:24.938104   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.938117   67740 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 17:53:24.938535   67740 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 17:53:24.938693   67740 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	I0522 17:53:24.938728   67740 host.go:66] Checking if "ha-828033" exists ...
	I0522 17:53:24.939066   67740 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 17:53:24.955478   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.964156   67740 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:24.964174   67740 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 17:53:24.964216   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 17:53:24.983375   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 17:53:24.987665   67740 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0522 17:53:25.061038   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 17:53:25.083441   67740 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 17:53:25.371936   67740 start.go:946] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
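
The sed pipeline at 17:53:24.987665 rewrites the CoreDNS Corefile so that host.minikube.internal resolves to the docker network gateway (192.168.49.1). Assuming kubectl is pointed at this cluster, the injected block can be inspected afterwards:

	kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
	  | grep -A3 'hosts {'
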
	I0522 17:53:25.697836   67740 round_trippers.go:463] GET https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 17:53:25.697859   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.697869   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.697875   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750106   67740 round_trippers.go:574] Response Status: 200 OK in 52 milliseconds
	I0522 17:53:25.750738   67740 round_trippers.go:463] PUT https://192.168.49.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 17:53:25.750766   67740 round_trippers.go:469] Request Headers:
	I0522 17:53:25.750775   67740 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 17:53:25.750779   67740 round_trippers.go:473]     Accept: application/json, */*
	I0522 17:53:25.750781   67740 round_trippers.go:473]     Content-Type: application/json
	I0522 17:53:25.753047   67740 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 17:53:25.754766   67740 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0522 17:53:25.755957   67740 addons.go:505] duration metric: took 841.76495ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0522 17:53:25.755999   67740 start.go:245] waiting for cluster config update ...
	I0522 17:53:25.756022   67740 start.go:254] writing updated cluster config ...
	I0522 17:53:25.757404   67740 out.go:177] 
	I0522 17:53:25.758849   67740 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:53:25.758935   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.760603   67740 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 17:53:25.761714   67740 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:53:25.762872   67740 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:53:25.764352   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:25.764396   67740 cache.go:56] Caching tarball of preloaded images
	I0522 17:53:25.764446   67740 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:53:25.764489   67740 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 17:53:25.764505   67740 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 17:53:25.764593   67740 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 17:53:25.782684   67740 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 17:53:25.782710   67740 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 17:53:25.782728   67740 cache.go:194] Successfully downloaded all kic artifacts
	I0522 17:53:25.782765   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:53:25.782880   67740 start.go:364] duration metric: took 83.137µs to acquireMachinesLock for "ha-828033-m02"
	I0522 17:53:25.782911   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:53:25.783001   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:53:25.784711   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:53:25.784832   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:53:25.784852   67740 client.go:168] LocalClient.Create starting
	I0522 17:53:25.784917   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:53:25.784953   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.784985   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785059   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:53:25.785087   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:53:25.785100   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:53:25.785951   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:53:25.804785   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc00191cf30 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:53:25.804835   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
	I0522 17:53:25.804904   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:53:25.823769   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:53:25.840603   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:53:25.840678   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:53:26.430644   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:53:26.430675   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:53:26.430699   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:53:26.430758   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:53:30.969362   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.538563584s)
	I0522 17:53:30.969399   67740 kic.go:203] duration metric: took 4.538697459s to extract preloaded images to volume ...
	W0522 17:53:30.969534   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:53:30.969649   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:53:31.025232   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
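
The --ip 192.168.49.3 flag pins the m02 container to the static address calculated at 17:53:25.804835 on the user-defined ha-828033 network. A quick way to see which address each cluster container actually received (names taken from the log; the format string is a plain Go template):

	docker network inspect ha-828033 \
	  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'
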
	I0522 17:53:31.438620   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:53:31.457423   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.475562   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:53:31.519384   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:53:31.519414   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:53:31.724062   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:53:31.724104   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:53:31.751442   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.776640   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:53:31.776667   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:53:31.862090   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:53:31.891639   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:53:31.891731   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:31.917156   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:31.917467   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:31.917492   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:53:32.120712   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.120737   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:53:32.120785   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.137375   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.137553   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.137567   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:53:32.276420   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:53:32.276522   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:53:32.298553   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:53:32.298714   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32792 <nil> <nil>}
	I0522 17:53:32.298729   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:53:32.411237   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
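
The stanzas above dial the container's mapped SSH port (127.0.0.1:32792) with the key just provisioned and run each shell snippet in turn: `hostname`, the `tee` into /etc/hostname, and the /etc/hosts guard. A minimal sketch of one such round trip, assuming golang.org/x/crypto/ssh rather than libmachine's own client:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	pemBytes, err := os.ReadFile("id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(pemBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test rig, not production
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32792", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("SSH cmd output: %s", out)
}
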
	I0522 17:53:32.411298   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:53:32.411322   67740 ubuntu.go:177] setting up certificates
	I0522 17:53:32.411342   67740 provision.go:84] configureAuth start
	I0522 17:53:32.411438   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.427815   67740 provision.go:87] duration metric: took 16.459419ms to configureAuth
	W0522 17:53:32.427838   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.427861   67740 retry.go:31] will retry after 99.984µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
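
This failure is the crux of the test: the inspect template indexes .NetworkSettings.Networks by the node name "ha-828033-m02", but the `docker run` at 17:53:31 attached the container to the network named "ha-828033", so the `with` block matches nothing, the command prints an empty string, and splitting that on "," yields a single empty element, hence "got 1 values: []". A minimal reproduction of the parse (parseContainerAddrs is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"strings"
)

// parseContainerAddrs mimics splitting the template output "IPv4,IPv6".
func parseContainerAddrs(inspectOut string) ([]string, error) {
	addrs := strings.Split(strings.TrimSpace(inspectOut), ",")
	if len(addrs) != 2 {
		return nil, fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
	}
	return addrs, nil
}

func main() {
	// The mismatched network key renders nothing, so the output is empty.
	_, err := parseContainerAddrs("")
	fmt.Println(err) // ... got 1 values: []
}
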
	I0522 17:53:32.428984   67740 provision.go:84] configureAuth start
	I0522 17:53:32.429054   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.445063   67740 provision.go:87] duration metric: took 16.057791ms to configureAuth
	W0522 17:53:32.445082   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.445102   67740 retry.go:31] will retry after 208.046µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.446175   67740 provision.go:84] configureAuth start
	I0522 17:53:32.446261   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.463887   67740 provision.go:87] duration metric: took 17.691272ms to configureAuth
	W0522 17:53:32.463912   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.463934   67740 retry.go:31] will retry after 199.015µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.465043   67740 provision.go:84] configureAuth start
	I0522 17:53:32.465105   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.486733   67740 provision.go:87] duration metric: took 21.670064ms to configureAuth
	W0522 17:53:32.486759   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.486781   67740 retry.go:31] will retry after 297.941µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.487897   67740 provision.go:84] configureAuth start
	I0522 17:53:32.487975   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.507152   67740 provision.go:87] duration metric: took 19.234225ms to configureAuth
	W0522 17:53:32.507176   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.507196   67740 retry.go:31] will retry after 745.775µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.508343   67740 provision.go:84] configureAuth start
	I0522 17:53:32.508443   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.525068   67740 provision.go:87] duration metric: took 16.703078ms to configureAuth
	W0522 17:53:32.525086   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.525106   67740 retry.go:31] will retry after 599.638µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.526213   67740 provision.go:84] configureAuth start
	I0522 17:53:32.526268   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.542838   67740 provision.go:87] duration metric: took 16.605804ms to configureAuth
	W0522 17:53:32.542858   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.542874   67740 retry.go:31] will retry after 1.661041ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.545050   67740 provision.go:84] configureAuth start
	I0522 17:53:32.545124   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.567428   67740 provision.go:87] duration metric: took 22.355702ms to configureAuth
	W0522 17:53:32.567454   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.567475   67740 retry.go:31] will retry after 2.108109ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.570624   67740 provision.go:84] configureAuth start
	I0522 17:53:32.570712   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.592038   67740 provision.go:87] duration metric: took 21.385415ms to configureAuth
	W0522 17:53:32.592083   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.592109   67740 retry.go:31] will retry after 2.355136ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.595345   67740 provision.go:84] configureAuth start
	I0522 17:53:32.595474   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.616428   67740 provision.go:87] duration metric: took 21.063557ms to configureAuth
	W0522 17:53:32.616444   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.616459   67740 retry.go:31] will retry after 2.728057ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.619645   67740 provision.go:84] configureAuth start
	I0522 17:53:32.619716   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.636068   67740 provision.go:87] duration metric: took 16.400565ms to configureAuth
	W0522 17:53:32.636089   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.636107   67740 retry.go:31] will retry after 4.374124ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.641290   67740 provision.go:84] configureAuth start
	I0522 17:53:32.641357   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.661778   67740 provision.go:87] duration metric: took 20.468934ms to configureAuth
	W0522 17:53:32.661802   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.661830   67740 retry.go:31] will retry after 8.99759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.671042   67740 provision.go:84] configureAuth start
	I0522 17:53:32.671123   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.691366   67740 provision.go:87] duration metric: took 20.298927ms to configureAuth
	W0522 17:53:32.691389   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.691409   67740 retry.go:31] will retry after 8.160386ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.700603   67740 provision.go:84] configureAuth start
	I0522 17:53:32.700678   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.720331   67740 provision.go:87] duration metric: took 19.708758ms to configureAuth
	W0522 17:53:32.720351   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.720370   67740 retry.go:31] will retry after 17.367544ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.738551   67740 provision.go:84] configureAuth start
	I0522 17:53:32.738628   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.759082   67740 provision.go:87] duration metric: took 20.511019ms to configureAuth
	W0522 17:53:32.759106   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.759126   67740 retry.go:31] will retry after 15.566976ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.775330   67740 provision.go:84] configureAuth start
	I0522 17:53:32.775414   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.794844   67740 provision.go:87] duration metric: took 19.490522ms to configureAuth
	W0522 17:53:32.794868   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.794890   67740 retry.go:31] will retry after 23.240317ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.819079   67740 provision.go:84] configureAuth start
	I0522 17:53:32.819159   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.834908   67740 provision.go:87] duration metric: took 15.798234ms to configureAuth
	W0522 17:53:32.834926   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.834943   67740 retry.go:31] will retry after 47.572088ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.883190   67740 provision.go:84] configureAuth start
	I0522 17:53:32.883335   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:32.904257   67740 provision.go:87] duration metric: took 21.017488ms to configureAuth
	W0522 17:53:32.904296   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:32.904322   67740 retry.go:31] will retry after 146.348345ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.051578   67740 provision.go:84] configureAuth start
	I0522 17:53:33.051698   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.071933   67740 provision.go:87] duration metric: took 20.324402ms to configureAuth
	W0522 17:53:33.071959   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.071983   67740 retry.go:31] will retry after 83.786289ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.156290   67740 provision.go:84] configureAuth start
	I0522 17:53:33.156396   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.176346   67740 provision.go:87] duration metric: took 20.024388ms to configureAuth
	W0522 17:53:33.176365   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.176388   67740 retry.go:31] will retry after 188.977656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.365590   67740 provision.go:84] configureAuth start
	I0522 17:53:33.365687   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.385235   67740 provision.go:87] duration metric: took 19.618338ms to configureAuth
	W0522 17:53:33.385262   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.385284   67740 retry.go:31] will retry after 372.297422ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.758500   67740 provision.go:84] configureAuth start
	I0522 17:53:33.758620   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:33.778278   67740 provision.go:87] duration metric: took 19.745956ms to configureAuth
	W0522 17:53:33.778300   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:33.778321   67740 retry.go:31] will retry after 420.930054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.199905   67740 provision.go:84] configureAuth start
	I0522 17:53:34.200025   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.220225   67740 provision.go:87] duration metric: took 20.271685ms to configureAuth
	W0522 17:53:34.220245   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.220261   67740 retry.go:31] will retry after 609.139566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.829960   67740 provision.go:84] configureAuth start
	I0522 17:53:34.830073   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:34.847415   67740 provision.go:87] duration metric: took 17.414439ms to configureAuth
	W0522 17:53:34.847434   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:34.847453   67740 retry.go:31] will retry after 1.378249793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.226841   67740 provision.go:84] configureAuth start
	I0522 17:53:36.226917   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:36.244043   67740 provision.go:87] duration metric: took 17.173768ms to configureAuth
	W0522 17:53:36.244065   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:36.244085   67740 retry.go:31] will retry after 915.566153ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.160064   67740 provision.go:84] configureAuth start
	I0522 17:53:37.160145   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:37.178672   67740 provision.go:87] duration metric: took 18.578279ms to configureAuth
	W0522 17:53:37.178703   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:37.178727   67740 retry.go:31] will retry after 1.823277401s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.003329   67740 provision.go:84] configureAuth start
	I0522 17:53:39.003413   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:39.022621   67740 provision.go:87] duration metric: took 19.266756ms to configureAuth
	W0522 17:53:39.022641   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:39.022658   67740 retry.go:31] will retry after 4.73403722s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.757466   67740 provision.go:84] configureAuth start
	I0522 17:53:43.757544   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:43.774236   67740 provision.go:87] duration metric: took 16.744416ms to configureAuth
	W0522 17:53:43.774257   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:43.774290   67740 retry.go:31] will retry after 6.02719967s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.804363   67740 provision.go:84] configureAuth start
	I0522 17:53:49.804470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:53:49.821435   67740 provision.go:87] duration metric: took 17.029431ms to configureAuth
	W0522 17:53:49.821471   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:53:49.821493   67740 retry.go:31] will retry after 11.229046488s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.052359   67740 provision.go:84] configureAuth start
	I0522 17:54:01.052467   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:01.068843   67740 provision.go:87] duration metric: took 16.436722ms to configureAuth
	W0522 17:54:01.068864   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:01.068886   67740 retry.go:31] will retry after 11.446957942s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.516410   67740 provision.go:84] configureAuth start
	I0522 17:54:12.516501   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:12.532588   67740 provision.go:87] duration metric: took 16.136044ms to configureAuth
	W0522 17:54:12.532612   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:12.532630   67740 retry.go:31] will retry after 11.131225111s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.664770   67740 provision.go:84] configureAuth start
	I0522 17:54:23.664874   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:23.681171   67740 provision.go:87] duration metric: took 16.370034ms to configureAuth
	W0522 17:54:23.681191   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:23.681208   67740 retry.go:31] will retry after 19.415128992s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.097052   67740 provision.go:84] configureAuth start
	I0522 17:54:43.097128   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:54:43.114019   67740 provision.go:87] duration metric: took 16.940583ms to configureAuth
	W0522 17:54:43.114038   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114058   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:43.114064   67740 machine.go:97] duration metric: took 1m11.222400206s to provisionDockerMachine
	I0522 17:54:43.114070   67740 client.go:171] duration metric: took 1m17.329214904s to LocalClient.Create
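
Note the retry cadence above: delays grow from roughly 100µs to ~19s, approximately doubling with jitter, until configureAuth gives up at 17:54:43 and the whole provisioning attempt is charged 1m11s. A minimal sketch of a backoff loop with that shape; the constants are assumptions, not minikube's actual retry.go tuning:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(maxElapsed time.Duration, op func() error) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxElapsed)
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		// Jitter the delay, then grow it; cap so waits stay bounded.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(sleep)
		if delay < 20*time.Second {
			delay *= 2
		}
	}
}

func main() {
	attempts := 0
	err := retryWithBackoff(2*time.Second, func() error {
		attempts++
		return errors.New("container addresses should have 2 values")
	})
	fmt.Println(attempts, err)
}
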
	I0522 17:54:45.114802   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:54:45.114851   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:45.131137   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32792 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:54:45.211800   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:54:45.215648   67740 start.go:128] duration metric: took 1m19.43263441s to createHost
	I0522 17:54:45.215668   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m19.432772722s
	W0522 17:54:45.215682   67740 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:45.216030   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:45.231847   67740 stop.go:39] StopHost: ha-828033-m02
	W0522 17:54:45.232101   67740 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.233821   67740 out.go:177] * Stopping node "ha-828033-m02"  ...
	I0522 17:54:45.235034   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	W0522 17:54:45.250648   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:45.252222   67740 out.go:177] * Powering off "ha-828033-m02" via SSH ...
	I0522 17:54:45.253375   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	I0522 17:54:46.310178   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.325583   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:46.325611   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:46.325618   67740 stop.go:96] shutdown container: err=<nil>
	I0522 17:54:46.325665   67740 main.go:141] libmachine: Stopping "ha-828033-m02"...
	I0522 17:54:46.325732   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:46.341372   67740 stop.go:66] stop err: Machine "ha-828033-m02" is already stopped.
	I0522 17:54:46.341401   67740 stop.go:69] host is already stopped
	W0522 17:54:47.341542   67740 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 17:54:47.343381   67740 out.go:177] * Deleting "ha-828033-m02" in docker ...
	I0522 17:54:47.344698   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	I0522 17:54:47.361099   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:47.376628   67740 cli_runner.go:164] Run: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0"
	W0522 17:54:47.392353   67740 cli_runner.go:211] docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 17:54:47.392393   67740 oci.go:650] error shutdown ha-828033-m02: docker exec --privileged -t ha-828033-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 3cc3e3e24d53966ce3e9b255e1c04504ec6e9b4b2d34e3fd546f7c2f3049902f is not running
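
The second `sudo init 0` at 17:54:47 fails only because the container already stopped after the first one, and the code treats that as a successful shutdown ("Successfully shutdown container" follows). A minimal sketch of that tolerance, shelling out to the docker CLI; the helper is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// shutdown sends "init 0" into the container and treats an
// already-stopped container as success, like the log above.
func shutdown(name string) error {
	out, err := exec.Command("docker", "exec", "--privileged", "-t", name,
		"/bin/bash", "-c", "sudo init 0").CombinedOutput()
	if err != nil && strings.Contains(string(out), "is not running") {
		return nil // already stopped: nothing to do
	}
	return err
}

func main() {
	if err := shutdown("ha-828033-m02"); err != nil {
		fmt.Println("shutdown failed:", err)
		return
	}
	fmt.Println("container is down")
}
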
	I0522 17:54:48.392556   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:48.408902   67740 oci.go:658] container ha-828033-m02 status is Stopped
	I0522 17:54:48.408930   67740 oci.go:670] Successfully shutdown container ha-828033-m02
	I0522 17:54:48.408985   67740 cli_runner.go:164] Run: docker rm -f -v ha-828033-m02
	I0522 17:54:48.429674   67740 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-828033-m02
	W0522 17:54:48.445584   67740 cli_runner.go:211] docker container inspect -f {{.Id}} ha-828033-m02 returned with exit code 1
	I0522 17:54:48.445652   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:48.460965   67740 cli_runner.go:164] Run: docker network rm ha-828033
	W0522 17:54:48.475541   67740 cli_runner.go:211] docker network rm ha-828033 returned with exit code 1
	W0522 17:54:48.475635   67740 kic.go:390] failed to remove network (which might be okay) ha-828033: unable to delete a network that is attached to a running container
	W0522 17:54:48.475837   67740 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:54:48.475849   67740 start.go:728] Will try again in 5 seconds ...
	I0522 17:54:53.476927   67740 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 17:54:53.477039   67740 start.go:364] duration metric: took 79.073µs to acquireMachinesLock for "ha-828033-m02"
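
The lock spec above carries a 500ms poll delay and a 10m timeout; here the lock is free, so the acquire returns in 79µs. A minimal sketch of a Delay/Timeout acquire with those semantics, using an in-process map as a stand-in for minikube's lock store:

package main

import (
	"errors"
	"fmt"
	"sync"
	"time"
)

var (
	mu   sync.Mutex
	held = map[string]bool{}
)

func acquire(name string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		mu.Lock()
		if !held[name] {
			held[name] = true
			mu.Unlock()
			return nil
		}
		mu.Unlock()
		if time.Now().After(deadline) {
			return errors.New("timed out acquiring " + name)
		}
		time.Sleep(delay) // poll every Delay, like the 500ms above
	}
}

func main() {
	start := time.Now()
	_ = acquire("ha-828033-m02", 500*time.Millisecond, 10*time.Minute)
	fmt.Println("acquired in", time.Since(start))
}
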
	I0522 17:54:53.477066   67740 start.go:93] Provisioning new machine with config: &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 17:54:53.477162   67740 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 17:54:53.479034   67740 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 17:54:53.479153   67740 start.go:159] libmachine.API.Create for "ha-828033" (driver="docker")
	I0522 17:54:53.479185   67740 client.go:168] LocalClient.Create starting
	I0522 17:54:53.479249   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 17:54:53.479310   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479333   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479397   67740 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 17:54:53.479424   67740 main.go:141] libmachine: Decoding PEM data...
	I0522 17:54:53.479441   67740 main.go:141] libmachine: Parsing certificate...
	I0522 17:54:53.479649   67740 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 17:54:53.495874   67740 network_create.go:77] Found existing network {name:ha-828033 subnet:0xc001acd440 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0522 17:54:53.495903   67740 kic.go:121] calculated static IP "192.168.49.3" for the "ha-828033-m02" container
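
The "calculated static IP" follows from the existing network's gateway (192.168.49.1) plus the node's ordinal: the primary got .2, so m02 gets .3. A minimal sketch of that arithmetic, assuming a /24 with no octet carry; the function name is illustrative:

package main

import (
	"fmt"
	"net"
)

// nodeIP derives a node's address as gateway + nodeIndex within a /24.
func nodeIP(gateway string, nodeIndex int) (string, error) {
	ip := net.ParseIP(gateway).To4()
	if ip == nil {
		return "", fmt.Errorf("not an IPv4 gateway: %s", gateway)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(nodeIndex) // assumes no octet overflow; a sketch only
	return out.String(), nil
}

func main() {
	ip, _ := nodeIP("192.168.49.1", 2) // m02 is node index 2
	fmt.Println(ip)                    // 192.168.49.3
}
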
	I0522 17:54:53.495960   67740 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 17:54:53.511000   67740 cli_runner.go:164] Run: docker volume create ha-828033-m02 --label name.minikube.sigs.k8s.io=ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 17:54:53.526234   67740 oci.go:103] Successfully created a docker volume ha-828033-m02
	I0522 17:54:53.526311   67740 cli_runner.go:164] Run: docker run --rm --name ha-828033-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --entrypoint /usr/bin/test -v ha-828033-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 17:54:53.904691   67740 oci.go:107] Successfully prepared a docker volume ha-828033-m02
	I0522 17:54:53.904730   67740 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:54:53.904761   67740 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 17:54:53.904817   67740 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 17:54:58.186920   67740 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v ha-828033-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.282059818s)
	I0522 17:54:58.186951   67740 kic.go:203] duration metric: took 4.282198207s to extract preloaded images to volume ...
	W0522 17:54:58.187117   67740 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 17:54:58.187205   67740 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 17:54:58.233376   67740 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-828033-m02 --name ha-828033-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-828033-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-828033-m02 --network ha-828033 --ip 192.168.49.3 --volume ha-828033-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 17:54:58.523486   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Running}}
	I0522 17:54:58.540206   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.557874   67740 cli_runner.go:164] Run: docker exec ha-828033-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 17:54:58.597167   67740 oci.go:144] the created container "ha-828033-m02" has a running status.
	I0522 17:54:58.597198   67740 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa...
	I0522 17:54:58.715099   67740 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 17:54:58.715136   67740 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 17:54:58.734167   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.752454   67740 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 17:54:58.752480   67740 kic_runner.go:114] Args: [docker exec --privileged ha-828033-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 17:54:58.793632   67740 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 17:54:58.811842   67740 machine.go:94] provisionDockerMachine start ...
	I0522 17:54:58.811942   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:54:58.831262   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:54:58.831524   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:54:58.831543   67740 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 17:54:58.832166   67740 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40570->127.0.0.1:32797: read: connection reset by peer
	I0522 17:55:01.950656   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:01.950684   67740 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 17:55:01.950756   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:01.967254   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:01.967478   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:01.967497   67740 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 17:55:02.089579   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 17:55:02.089655   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:55:02.105960   67740 main.go:141] libmachine: Using SSH client type: native
	I0522 17:55:02.106178   67740 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32797 <nil> <nil>}
	I0522 17:55:02.106203   67740 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 17:55:02.219113   67740 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 17:55:02.219142   67740 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 17:55:02.219165   67740 ubuntu.go:177] setting up certificates
	I0522 17:55:02.219178   67740 provision.go:84] configureAuth start
	I0522 17:55:02.219229   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.235165   67740 provision.go:87] duration metric: took 15.978249ms to configureAuth
	W0522 17:55:02.235185   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.235202   67740 retry.go:31] will retry after 111.405µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.236316   67740 provision.go:84] configureAuth start
	I0522 17:55:02.236371   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.251579   67740 provision.go:87] duration metric: took 15.244801ms to configureAuth
	W0522 17:55:02.251596   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.251612   67740 retry.go:31] will retry after 121.831µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.252718   67740 provision.go:84] configureAuth start
	I0522 17:55:02.252781   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.268254   67740 provision.go:87] duration metric: took 15.517955ms to configureAuth
	W0522 17:55:02.268272   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.268289   67740 retry.go:31] will retry after 122.468µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.269405   67740 provision.go:84] configureAuth start
	I0522 17:55:02.269470   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.286410   67740 provision.go:87] duration metric: took 16.987035ms to configureAuth
	W0522 17:55:02.286429   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.286450   67740 retry.go:31] will retry after 407.867µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.287564   67740 provision.go:84] configureAuth start
	I0522 17:55:02.287622   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.302324   67740 provision.go:87] duration metric: took 14.743181ms to configureAuth
	W0522 17:55:02.302338   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.302353   67740 retry.go:31] will retry after 682.441µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.303472   67740 provision.go:84] configureAuth start
	I0522 17:55:02.303536   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.318179   67740 provision.go:87] duration metric: took 14.688319ms to configureAuth
	W0522 17:55:02.318196   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.318213   67740 retry.go:31] will retry after 740.096µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.319311   67740 provision.go:84] configureAuth start
	I0522 17:55:02.319362   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.333371   67740 provision.go:87] duration metric: took 14.043622ms to configureAuth
	W0522 17:55:02.333386   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.333402   67740 retry.go:31] will retry after 794.169µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.334517   67740 provision.go:84] configureAuth start
	I0522 17:55:02.334581   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.350167   67740 provision.go:87] duration metric: took 15.635141ms to configureAuth
	W0522 17:55:02.350182   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.350198   67740 retry.go:31] will retry after 1.884267ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.352398   67740 provision.go:84] configureAuth start
	I0522 17:55:02.352452   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.368273   67740 provision.go:87] duration metric: took 15.856327ms to configureAuth
	W0522 17:55:02.368295   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.368312   67740 retry.go:31] will retry after 2.946487ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.371499   67740 provision.go:84] configureAuth start
	I0522 17:55:02.371558   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.386648   67740 provision.go:87] duration metric: took 15.13195ms to configureAuth
	W0522 17:55:02.386668   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.386686   67740 retry.go:31] will retry after 3.738526ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.390865   67740 provision.go:84] configureAuth start
	I0522 17:55:02.390919   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.406987   67740 provision.go:87] duration metric: took 16.104393ms to configureAuth
	W0522 17:55:02.407002   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.407015   67740 retry.go:31] will retry after 6.575896ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.414192   67740 provision.go:84] configureAuth start
	I0522 17:55:02.414252   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.428668   67740 provision.go:87] duration metric: took 14.459146ms to configureAuth
	W0522 17:55:02.428682   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.428697   67740 retry.go:31] will retry after 8.970723ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.437877   67740 provision.go:84] configureAuth start
	I0522 17:55:02.437947   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.454233   67740 provision.go:87] duration metric: took 16.335255ms to configureAuth
	W0522 17:55:02.454251   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.454267   67740 retry.go:31] will retry after 10.684147ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.465455   67740 provision.go:84] configureAuth start
	I0522 17:55:02.465526   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.481723   67740 provision.go:87] duration metric: took 16.239661ms to configureAuth
	W0522 17:55:02.481741   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.481763   67740 retry.go:31] will retry after 18.313065ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.500964   67740 provision.go:84] configureAuth start
	I0522 17:55:02.501036   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.516727   67740 provision.go:87] duration metric: took 15.73571ms to configureAuth
	W0522 17:55:02.516744   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.516762   67740 retry.go:31] will retry after 38.484546ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.555967   67740 provision.go:84] configureAuth start
	I0522 17:55:02.556066   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.571765   67740 provision.go:87] duration metric: took 15.775996ms to configureAuth
	W0522 17:55:02.571791   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.571810   67740 retry.go:31] will retry after 39.432408ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.612013   67740 provision.go:84] configureAuth start
	I0522 17:55:02.612103   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.628447   67740 provision.go:87] duration metric: took 16.410627ms to configureAuth
	W0522 17:55:02.628466   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.628485   67740 retry.go:31] will retry after 33.551108ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.662675   67740 provision.go:84] configureAuth start
	I0522 17:55:02.662769   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.679445   67740 provision.go:87] duration metric: took 16.731972ms to configureAuth
	W0522 17:55:02.679464   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.679484   67740 retry.go:31] will retry after 81.05036ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.760653   67740 provision.go:84] configureAuth start
	I0522 17:55:02.760738   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:02.776954   67740 provision.go:87] duration metric: took 16.276016ms to configureAuth
	W0522 17:55:02.776979   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.776998   67740 retry.go:31] will retry after 214.543912ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:02.992409   67740 provision.go:84] configureAuth start
	I0522 17:55:02.992522   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.009801   67740 provision.go:87] duration metric: took 17.348572ms to configureAuth
	W0522 17:55:03.009828   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.009848   67740 retry.go:31] will retry after 147.68294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.158197   67740 provision.go:84] configureAuth start
	I0522 17:55:03.158288   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.174209   67740 provision.go:87] duration metric: took 15.985368ms to configureAuth
	W0522 17:55:03.174228   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.174245   67740 retry.go:31] will retry after 271.429453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.446454   67740 provision.go:84] configureAuth start
	I0522 17:55:03.446568   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:03.462755   67740 provision.go:87] duration metric: took 16.269029ms to configureAuth
	W0522 17:55:03.462775   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:03.462813   67740 retry.go:31] will retry after 640.121031ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.103329   67740 provision.go:84] configureAuth start
	I0522 17:55:04.103429   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.120167   67740 provision.go:87] duration metric: took 16.813953ms to configureAuth
	W0522 17:55:04.120188   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.120208   67740 retry.go:31] will retry after 602.013778ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.722980   67740 provision.go:84] configureAuth start
	I0522 17:55:04.723059   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:04.739287   67740 provision.go:87] duration metric: took 16.263112ms to configureAuth
	W0522 17:55:04.739308   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:04.739326   67740 retry.go:31] will retry after 1.341223625s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.081721   67740 provision.go:84] configureAuth start
	I0522 17:55:06.081836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:06.098304   67740 provision.go:87] duration metric: took 16.547011ms to configureAuth
	W0522 17:55:06.098322   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:06.098338   67740 retry.go:31] will retry after 2.170272382s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.269528   67740 provision.go:84] configureAuth start
	I0522 17:55:08.269635   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:08.285825   67740 provision.go:87] duration metric: took 16.2651ms to configureAuth
	W0522 17:55:08.285844   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:08.285861   67740 retry.go:31] will retry after 3.377189854s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.663807   67740 provision.go:84] configureAuth start
	I0522 17:55:11.663916   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:11.681079   67740 provision.go:87] duration metric: took 17.243701ms to configureAuth
	W0522 17:55:11.681112   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:11.681131   67740 retry.go:31] will retry after 2.766930623s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.448404   67740 provision.go:84] configureAuth start
	I0522 17:55:14.448485   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:14.465374   67740 provision.go:87] duration metric: took 16.943416ms to configureAuth
	W0522 17:55:14.465392   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:14.465408   67740 retry.go:31] will retry after 7.317834793s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.783808   67740 provision.go:84] configureAuth start
	I0522 17:55:21.783931   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:21.801618   67740 provision.go:87] duration metric: took 17.778585ms to configureAuth
	W0522 17:55:21.801637   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:21.801655   67740 retry.go:31] will retry after 5.749970452s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.552576   67740 provision.go:84] configureAuth start
	I0522 17:55:27.552676   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:27.569090   67740 provision.go:87] duration metric: took 16.487886ms to configureAuth
	W0522 17:55:27.569109   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:27.569126   67740 retry.go:31] will retry after 12.570280817s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.141724   67740 provision.go:84] configureAuth start
	I0522 17:55:40.141836   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:40.158702   67740 provision.go:87] duration metric: took 16.931082ms to configureAuth
	W0522 17:55:40.158723   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:40.158743   67740 retry.go:31] will retry after 13.696494034s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.856578   67740 provision.go:84] configureAuth start
	I0522 17:55:53.856693   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:55:53.873246   67740 provision.go:87] duration metric: took 16.620408ms to configureAuth
	W0522 17:55:53.873273   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:55:53.873290   67740 retry.go:31] will retry after 32.163778232s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.037485   67740 provision.go:84] configureAuth start
	I0522 17:56:26.037596   67740 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 17:56:26.054707   67740 provision.go:87] duration metric: took 17.19549ms to configureAuth
	W0522 17:56:26.054725   67740 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054742   67740 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:26.054750   67740 machine.go:97] duration metric: took 1m27.242886101s to provisionDockerMachine
	I0522 17:56:26.054758   67740 client.go:171] duration metric: took 1m32.575565656s to LocalClient.Create
	I0522 17:56:28.055434   67740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 17:56:28.055492   67740 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 17:56:28.072469   67740 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32797 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 17:56:28.155834   67740 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 17:56:28.159690   67740 start.go:128] duration metric: took 1m34.682513511s to createHost
	I0522 17:56:28.159711   67740 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m34.682658667s
	W0522 17:56:28.159799   67740 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 17:56:28.161597   67740 out.go:177] 
	W0522 17:56:28.162787   67740 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 17:56:28.162807   67740 out.go:239] * 
	W0522 17:56:28.163671   67740 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 17:56:28.165036   67740 out.go:177] 
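
The repeated configureAuth failures above all trace back to the templated `docker container inspect` call: when the m02 container is not attached to the "ha-828033-m02" network, the `{{with (index .NetworkSettings.Networks ...)}}` clause renders an empty string, so splitting the output on "," yields one empty field instead of an IPv4,IPv6 pair. The retry helper then backs off with growing delays (6ms up to ~32s) until the provisioning window is exhausted. A minimal sketch of that parsing, assuming the split-on-comma behavior; parseContainerIPs is a hypothetical name, not minikube's actual function:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseContainerIPs is a hypothetical stand-in for minikube's parsing of the
    // templated `docker container inspect` output, which it expects to be an
    // "IPv4,IPv6" pair.
    func parseContainerIPs(inspectOutput string) (ipv4, ipv6 string, err error) {
    	fields := strings.Split(strings.TrimSpace(inspectOutput), ",")
    	if len(fields) != 2 {
    		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(fields), fields)
    	}
    	return fields[0], fields[1], nil
    }

    func main() {
    	// A container missing from the named network renders the template to "",
    	// reproducing the exact error in the log above.
    	_, _, err := parseContainerIPs("")
    	fmt.Println(err) // container addresses should have 2 values, got 1 values: []
    }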
	
	
	==> Docker <==
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:42 ha-828033 dockerd[1209]: 2024/05/22 18:08:42 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:43 ha-828033 dockerd[1209]: 2024/05/22 18:08:43 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:08:44 ha-828033 dockerd[1209]: 2024/05/22 18:08:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
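
The dockerd warnings above are Go's net/http complaining that a handler (here, the otelhttp instrumentation wrapper) called WriteHeader after the response status had already been written; the extra call is ignored, so the messages are noisy but harmless. A generic reproduction of the warning, not Docker's actual handler code:

    package main

    import (
    	"log"
    	"net/http"
    )

    func main() {
    	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
    		w.WriteHeader(http.StatusOK)
    		// The second call is ignored and net/http logs:
    		// "http: superfluous response.WriteHeader call from main.main.func1 (...)"
    		w.WriteHeader(http.StatusInternalServerError)
    	})
    	log.Fatal(http.ListenAndServe(":8080", nil))
    }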
	May 22 18:11:26 ha-828033 dockerd[1209]: 2024/05/22 18:11:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   14 minutes ago      Running             busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         17 minutes ago      Running             coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              17 minutes ago      Running             kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         18 minutes ago      Running             storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	63f49aaadee91       cbb01a7bd410d                                                                                         18 minutes ago      Exited              coredns                   0                   91e8c76c71ae7       coredns-7db6d8ff4d-dxfhb
	dd5bd702646a4       cbb01a7bd410d                                                                                         18 minutes ago      Exited              coredns                   0                   8b3fd8cf48c95       coredns-7db6d8ff4d-gznzs
	faac4370a3326       747097150317f                                                                                         18 minutes ago      Running             kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     18 minutes ago      Running             kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         18 minutes ago      Running             kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	71559235c3028       91be940803172                                                                                         18 minutes ago      Running             kube-apiserver            0                   06f42956ef3cd       kube-apiserver-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         18 minutes ago      Running             etcd                      0                   ca6a020652c53       etcd-ha-828033
	dce56fa365a91       25a1387cdab82                                                                                         18 minutes ago      Running             kube-controller-manager   0                   5c61ca7a89838       kube-controller-manager-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	
	
	==> coredns [63f49aaadee9] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:45766 - 14716 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.028282739s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:49572 - 10170 "HINFO IN 3871236508495439397.4299498115821132756. udp 57 false 512" - - 0 5.000099963s
	[ERROR] plugin/errors: 2 3871236508495439397.4299498115821132756. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [dd5bd702646a] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:39647 - 60877 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000133195s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:48160 - 62410 "HINFO IN 7283636356468983640.375644074009443122. udp 56 false 512" - - 0 5.000085105s
	[ERROR] plugin/errors: 2 7283636356468983640.375644074009443122. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
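
The "network is unreachable" errors in the two Exited CoreDNS containers mean those replicas had no route to 10.96.0.1:443, the ClusterIP of the kubernetes Service that the kubernetes plugin must reach for its list/watch calls; that is typical while the node's service routing (kube-proxy rules, CNI) is not yet programmed, which fits these being the first attempts that were later replaced. A small diagnostic probe in the same spirit — a sketch, not CoreDNS code:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// 10.96.0.1:443 is the ClusterIP of the default/kubernetes Service that
    	// CoreDNS's kubernetes plugin must reach for its list/watch calls.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
    	if err != nil {
    		fmt.Println("API service unreachable:", err) // e.g. "connect: network is unreachable"
    		return
    	}
    	defer conn.Close()
    	fmt.Println("API service reachable")
    }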
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	
	
	==> describe nodes <==
	Name:               ha-828033
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-828033
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=ha-828033
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T17_53_12_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 17:53:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-828033
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:11:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:06:57 +0000   Wed, 22 May 2024 17:53:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-828033
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 ae91489c226b473f87d2128d6a868a8a
	  System UUID:                dcef1866-ae43-483c-a65a-94c2bd9ff7da
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-nhhq2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-7db6d8ff4d-dxfhb             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 coredns-7db6d8ff4d-gznzs             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     18m
	  kube-system                 etcd-ha-828033                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         18m
	  kube-system                 kindnet-swzdx                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      18m
	  kube-system                 kube-apiserver-ha-828033             250m (3%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-828033    200m (2%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-fl69s                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-828033             100m (1%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-828033                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 18m   kube-proxy       
	  Normal  Starting                 18m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m   kubelet          Node ha-828033 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m   kubelet          Node ha-828033 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m   kubelet          Node ha-828033 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                18m   kubelet          Node ha-828033 status is now: NodeReady
	  Normal  RegisteredNode           18m   node-controller  Node ha-828033 event: Registered Node ha-828033 in Controller
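
Percentages in kubectl describe captures often surface in these reports as "0 (0%!)(MISSING)": that is Go's fmt package flagging "%)" as a verb with no operand, which happens whenever captured text containing literal '%' signs is replayed through a Printf-style call as the format string. A minimal reproduction of the mangling:

    package main

    import "fmt"

    func main() {
    	row := "busybox-fc5497c4f-nhhq2  0 (0%)  0 (0%)"
    	// Wrong: dynamic text used as the format string. "%)" is parsed as a verb
    	// with no operand, so fmt emits "0 (0%!)(MISSING)" for each occurrence.
    	fmt.Printf(row + "\n")
    	// Right: pass captured text as an operand, never as the format.
    	fmt.Printf("%s\n", row)
    }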
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.05762Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T17:53:06.94516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945223Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.94525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 1"}
	{"level":"info","ts":"2024-05-22T17:53:06.945275Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945287Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.9453Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.945313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T17:53:06.946275Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.946897Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	
	
	==> kernel <==
	 18:11:28 up 53 min,  0 users,  load average: 0.19, 0.44, 0.47
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
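
The kindnet lines above come from its reconcile loop, which wakes roughly every ten seconds, lists the known node IPs, and re-syncs routes; with only the primary node present it logs nothing but "handling current node". The pattern, as a generic sketch rather than kindnet's actual code:

    package main

    import (
    	"log"
    	"time"
    )

    func main() {
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	nodeIPs := map[string]struct{}{"192.168.49.2": {}}
    	for i := 0; i < 3; i++ { // bounded here; the real loop runs forever
    		<-ticker.C
    		log.Printf("Handling node with IPs: %v", nodeIPs)
    		// ...re-sync routes/CNI state for each known node...
    	}
    }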
	
	
	==> kube-apiserver [71559235c302] <==
	I0522 17:53:09.143940       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 17:53:09.144006       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 17:53:09.147656       1 controller.go:615] quota admission added evaluator for: namespaces
	E0522 17:53:09.149131       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I0522 17:53:09.352351       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 17:53:10.000114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 17:53:10.003662       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 17:53:10.003677       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 17:53:10.401970       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 17:53:10.431664       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 17:53:10.564789       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 17:53:10.571710       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0522 17:53:10.572657       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 17:53:10.577337       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 17:53:11.057429       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 17:53:11.962630       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 17:53:11.972651       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 17:53:12.162218       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 17:53:25.167826       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 17:53:25.351601       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:08:28.087364       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38596: use of closed network connection
	E0522 18:08:28.455161       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38668: use of closed network connection
	E0522 18:08:28.801487       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38762: use of closed network connection
	E0522 18:08:30.845810       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38884: use of closed network connection
	E0522 18:08:30.987779       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:38902: use of closed network connection
	
	
	==> kube-controller-manager [dce56fa365a9] <==
	I0522 17:53:24.574805       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:24.615689       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 17:53:24.615718       1 shared_informer.go:320] Caches are synced for PV protection
	I0522 17:53:24.619960       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 17:53:25.032081       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143504       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 17:53:25.143551       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 17:53:25.467902       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.282115ms"
	I0522 17:53:25.473436       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.477354ms"
	I0522 17:53:25.473538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="62.839µs"
	I0522 17:53:25.480355       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="67.122µs"
	I0522 17:53:27.539450       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="87.754µs"
	I0522 17:53:27.563897       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="109.015µs"
	I0522 17:53:40.888251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.836µs"
	I0522 17:53:40.903676       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.828828ms"
	I0522 17:53:40.903798       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="72.954µs"
	I0522 17:53:40.911852       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.610804ms"
	I0522 17:53:40.911935       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.925µs"
	I0522 17:56:29.936227       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="17.795723ms"
	I0522 17:56:29.941333       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="5.048122ms"
	I0522 17:56:29.941415       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.205µs"
	I0522 17:56:29.947541       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.013µs"
	I0522 17:56:29.947653       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="27.698µs"
	I0522 17:56:32.929446       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.842071ms"
	I0522 17:56:32.929529       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.589µs"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [f457f32fdd43] <==
	W0522 17:53:09.146946       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:09.148467       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:09.146991       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.345921    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7lb\" (UniqueName: \"kubernetes.io/projected/7e82964e-040d-419c-969e-e89b79f50b09-kube-api-access-pp7lb\") pod \"kube-proxy-fl69s\" (UID: \"7e82964e-040d-419c-969e-e89b79f50b09\") " pod="kube-system/kube-proxy-fl69s"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.459345    2487 topology_manager.go:215] "Topology Admit Handler" podUID="cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.464987    2487 topology_manager.go:215] "Topology Admit Handler" podUID="8c59e7e8-7d36-4396-a2db-f715d854e654" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547332    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-config-volume\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547376    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc4sj\" (UniqueName: \"kubernetes.io/projected/cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8-kube-api-access-lc4sj\") pod \"coredns-7db6d8ff4d-dxfhb\" (UID: \"cbb5dcd0-bdba-4215-8eff-6c1ceae3d5c8\") " pod="kube-system/coredns-7db6d8ff4d-dxfhb"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547404    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c59e7e8-7d36-4396-a2db-f715d854e654-config-volume\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.547419    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9l2b\" (UniqueName: \"kubernetes.io/projected/8c59e7e8-7d36-4396-a2db-f715d854e654-kube-api-access-k9l2b\") pod \"coredns-7db6d8ff4d-gznzs\" (UID: \"8c59e7e8-7d36-4396-a2db-f715d854e654\") " pod="kube-system/coredns-7db6d8ff4d-gznzs"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.694888    2487 topology_manager.go:215] "Topology Admit Handler" podUID="f6689fce-b948-4db5-abc7-7fb25d6d4e1c" podNamespace="kube-system" podName="storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749414    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnzd7\" (UniqueName: \"kubernetes.io/projected/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-kube-api-access-tnzd7\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:25 ha-828033 kubelet[2487]: I0522 17:53:25.749504    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f6689fce-b948-4db5-abc7-7fb25d6d4e1c-tmp\") pod \"storage-provisioner\" (UID: \"f6689fce-b948-4db5-abc7-7fb25d6d4e1c\") " pod="kube-system/storage-provisioner"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.269960    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e7f0294d0abed48fa06f52203ec68a5d32aee5fba94994c174dd344eb382d"
	May 22 17:53:26 ha-828033 kubelet[2487]: I0522 17:53:26.289737    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fl69s" podStartSLOduration=1.289718387 podStartE2EDuration="1.289718387s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:26.287458677 +0000 UTC m=+14.397120092" watchObservedRunningTime="2024-05-22 17:53:26.289718387 +0000 UTC m=+14.399379799"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.523164    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.5231425830000003 podStartE2EDuration="2.523142583s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.522832045 +0000 UTC m=+15.632493457" watchObservedRunningTime="2024-05-22 17:53:27.523142583 +0000 UTC m=+15.632803998"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572618    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dxfhb" podStartSLOduration=2.572598329 podStartE2EDuration="2.572598329s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.572250374 +0000 UTC m=+15.681911787" watchObservedRunningTime="2024-05-22 17:53:27.572598329 +0000 UTC m=+15.682259742"
	May 22 17:53:27 ha-828033 kubelet[2487]: I0522 17:53:27.572757    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gznzs" podStartSLOduration=2.572749753 podStartE2EDuration="2.572749753s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 17:53:27.538817538 +0000 UTC m=+15.648478952" watchObservedRunningTime="2024-05-22 17:53:27.572749753 +0000 UTC m=+15.682411167"
	May 22 17:53:31 ha-828033 kubelet[2487]: I0522 17:53:31.646168    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-swzdx" podStartSLOduration=1.752288075 podStartE2EDuration="6.646145265s" podCreationTimestamp="2024-05-22 17:53:25 +0000 UTC" firstStartedPulling="2024-05-22 17:53:25.87920307 +0000 UTC m=+13.988864465" lastFinishedPulling="2024-05-22 17:53:30.773060244 +0000 UTC m=+18.882721655" observedRunningTime="2024-05-22 17:53:31.645395144 +0000 UTC m=+19.755056551" watchObservedRunningTime="2024-05-22 17:53:31.646145265 +0000 UTC m=+19.755806677"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.197898    2487 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 17:53:32 ha-828033 kubelet[2487]: I0522 17:53:32.199915    2487 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699762    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.699796    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859168    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690"
	May 22 17:53:39 ha-828033 kubelet[2487]: I0522 17:53:39.859188    2487 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323"
	May 22 17:56:29 ha-828033 kubelet[2487]: I0522 17:56:29.930203    2487 topology_manager.go:215] "Topology Admit Handler" podUID="325da933-1b75-4a77-8d6e-ce3d65967653" podNamespace="default" podName="busybox-fc5497c4f-nhhq2"
	May 22 17:56:30 ha-828033 kubelet[2487]: I0522 17:56:30.087916    2487 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmbwg\" (UniqueName: \"kubernetes.io/projected/325da933-1b75-4a77-8d6e-ce3d65967653-kube-api-access-gmbwg\") pod \"busybox-fc5497c4f-nhhq2\" (UID: \"325da933-1b75-4a77-8d6e-ce3d65967653\") " pod="default/busybox-fc5497c4f-nhhq2"
	May 22 17:56:32 ha-828033 kubelet[2487]: I0522 17:56:32.924504    2487 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-nhhq2" podStartSLOduration=2.29128387 podStartE2EDuration="3.924482615s" podCreationTimestamp="2024-05-22 17:56:29 +0000 UTC" firstStartedPulling="2024-05-22 17:56:30.402870182 +0000 UTC m=+198.512531578" lastFinishedPulling="2024-05-22 17:56:32.036068928 +0000 UTC m=+200.145730323" observedRunningTime="2024-05-22 17:56:32.924350856 +0000 UTC m=+201.034012270" watchObservedRunningTime="2024-05-22 17:56:32.924482615 +0000 UTC m=+201.034144028"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
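
For context on the storage-provisioner lines above: the "attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath" / "successfully acquired lease" pair is client-go's standard leader-election handshake. The sketch below is a minimal, assumed reconstruction of that pattern using k8s.io/client-go's leaderelection package; it is not the provisioner's actual source, and the Identity string is invented for illustration. (The event in the log uses an Endpoints object as the lock; newer client-go code typically uses a Lease, as here.)

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// The provisioner runs in-cluster, so it can use the mounted service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same namespace/name as the lease in the log; the Identity is invented here.
	lock, err := resourcelock.New(
		resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "example-identity"},
	)
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long a lease stays valid without renewal
		RenewDeadline: 10 * time.Second, // the leader must renew within this window
		RetryPeriod:   2 * time.Second,  // how often non-leaders retry acquisition
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// This is where "Starting provisioner controller ..." would begin.
				log.Println("acquired lease; starting controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}
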
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:261: (dbg) Run:  kubectl --context ha-828033 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9
helpers_test.go:282: (dbg) kubectl --context ha-828033 describe pod busybox-fc5497c4f-cw6wc busybox-fc5497c4f-x4bg9:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cw6wc
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-h5x42 (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-h5x42:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  4m46s (x3 over 15m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-x4bg9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-w7gmh (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-w7gmh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  4m46s (x3 over 15m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.89s)
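
The FailedScheduling events above come from required pod anti-affinity: the busybox replicas must land on distinct hostnames, so once only one schedulable node remains, every replica beyond the first stays Pending with "didn't match pod anti-affinity rules". Below is a minimal sketch of such a spec using the k8s.io/api types; the field values are assumptions for illustration, not the test's actual manifest.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxPodSpec builds a pod spec whose hard anti-affinity rule allows at
// most one pod labeled app=busybox per node (topologyKey hostname).
func busyboxPodSpec() corev1.PodSpec {
	return corev1.PodSpec{
		Affinity: &corev1.Affinity{
			PodAntiAffinity: &corev1.PodAntiAffinity{
				// "Required" rules hard-block scheduling: with a single node,
				// a second replica can never be placed.
				RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
					LabelSelector: &metav1.LabelSelector{
						MatchLabels: map[string]string{"app": "busybox"},
					},
					TopologyKey: "kubernetes.io/hostname",
				}},
			},
		},
		Containers: []corev1.Container{{
			Name:    "busybox",
			Image:   "gcr.io/k8s-minikube/busybox:1.28",
			Command: []string{"sleep", "3600"},
		}},
	}
}

func main() {
	fmt.Printf("%+v\n", busyboxPodSpec())
}
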

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (215s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-828033 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-828033 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-828033 -v=7 --alsologtostderr: (11.798799298s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-828033 --wait=true -v=7 --alsologtostderr
E0522 18:11:55.309840   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:12:24.838562   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 18:13:18.355441   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
ha_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-828033 --wait=true -v=7 --alsologtostderr: exit status 80 (3m21.736076961s)

                                                
                                                
-- stdout --
	* [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "ha-828033" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	* Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "ha-828033-m02" ...
	* Updating the running docker "ha-828033-m02" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:11:41.134703   93587 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:11:41.134930   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.134938   93587 out.go:304] Setting ErrFile to fd 2...
	I0522 18:11:41.134942   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.135123   93587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:11:41.135663   93587 out.go:298] Setting JSON to false
	I0522 18:11:41.136597   93587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3245,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:11:41.136652   93587 start.go:139] virtualization: kvm guest
	I0522 18:11:41.138603   93587 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:11:41.139872   93587 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:11:41.139877   93587 notify.go:220] Checking for updates...
	I0522 18:11:41.141388   93587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:11:41.142594   93587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:41.143720   93587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:11:41.144893   93587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:11:41.145865   93587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:11:41.147279   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:41.147391   93587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:11:41.167202   93587 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:11:41.167354   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.213981   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.205284379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.214076   93587 docker.go:295] overlay module found
	I0522 18:11:41.216233   93587 out.go:177] * Using the docker driver based on existing profile
	I0522 18:11:41.217269   93587 start.go:297] selected driver: docker
	I0522 18:11:41.217284   93587 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.217363   93587 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:11:41.217435   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.262537   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.253560233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.263171   93587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:11:41.263204   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:41.263213   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:41.263260   93587 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.265782   93587 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:11:41.266790   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:11:41.267878   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:11:41.268972   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:41.268999   93587 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:11:41.268994   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:11:41.269006   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:11:41.269151   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:11:41.269173   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:11:41.269261   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.283614   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:11:41.283635   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:11:41.283654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:11:41.283689   93587 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:11:41.283753   93587 start.go:364] duration metric: took 41.779µs to acquireMachinesLock for "ha-828033"
	I0522 18:11:41.283775   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:11:41.283786   93587 fix.go:54] fixHost starting: 
	I0522 18:11:41.283991   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.299535   93587 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:11:41.299560   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:11:41.301277   93587 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:11:41.302545   93587 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:11:41.550741   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.567723   93587 kic.go:430] container "ha-828033" state is running.
	I0522 18:11:41.568146   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:41.584785   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.585001   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:11:41.585061   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:41.601067   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:41.601257   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:41.601268   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:11:41.601940   93587 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58778->127.0.0.1:32807: read: connection reset by peer
	I0522 18:11:44.714380   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:11:44.714404   93587 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:11:44.714459   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.731671   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.731883   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.731902   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:11:44.852943   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:11:44.853043   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.869576   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.869790   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.869817   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:11:44.979057   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:11:44.979089   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:11:44.979116   93587 ubuntu.go:177] setting up certificates
	I0522 18:11:44.979134   93587 provision.go:84] configureAuth start
	I0522 18:11:44.979199   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:44.994933   93587 provision.go:143] copyHostCerts
	I0522 18:11:44.994969   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995017   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:11:44.995033   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995108   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:11:44.995224   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995252   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:11:44.995259   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995322   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:11:44.995400   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995422   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:11:44.995429   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995474   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:11:44.995562   93587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 18:11:45.135697   93587 provision.go:177] copyRemoteCerts
	I0522 18:11:45.135763   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:11:45.135818   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.152921   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.238902   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:11:45.238973   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:11:45.258885   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:11:45.258948   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0522 18:11:45.278444   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:11:45.278494   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:11:45.297780   93587 provision.go:87] duration metric: took 318.629986ms to configureAuth
	I0522 18:11:45.297808   93587 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:11:45.297962   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:45.298004   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.313749   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.313923   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.313939   93587 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:11:45.427468   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:11:45.427494   93587 ubuntu.go:71] root file system type: overlay
	I0522 18:11:45.427580   93587 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:11:45.427626   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.444225   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.444413   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.444506   93587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:11:45.564594   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:11:45.564669   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.580720   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.580903   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.580920   93587 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:11:45.695828   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:11:45.695857   93587 machine.go:97] duration metric: took 4.110841908s to provisionDockerMachine
	I0522 18:11:45.695867   93587 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:11:45.695877   93587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:11:45.695924   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:11:45.695955   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.712232   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.795493   93587 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:11:45.798393   93587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:11:45.798434   93587 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:11:45.798444   93587 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:11:45.798453   93587 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:11:45.798471   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:11:45.798511   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:11:45.798590   93587 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:11:45.798602   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:11:45.798690   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:11:45.806167   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:45.826168   93587 start.go:296] duration metric: took 130.28741ms for postStartSetup
	I0522 18:11:45.826240   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:11:45.826284   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.842515   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.923712   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:11:45.927618   93587 fix.go:56] duration metric: took 4.643832098s for fixHost
	I0522 18:11:45.927656   93587 start.go:83] releasing machines lock for "ha-828033", held for 4.643887227s
	I0522 18:11:45.927713   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:45.944156   93587 ssh_runner.go:195] Run: cat /version.json
	I0522 18:11:45.944201   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.944235   93587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:11:45.944288   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.962364   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.962780   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:46.042681   93587 ssh_runner.go:195] Run: systemctl --version
	I0522 18:11:46.109435   93587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:11:46.113688   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:11:46.129549   93587 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:11:46.129616   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:11:46.137374   93587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
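	The two find commands above first patch the loopback CNI config (injecting a "name" field and pinning cniVersion to 1.0.0), then rename any bridge/podman configs out of the way so minikube's own CNI can manage pod networking. A quick check of the patched file (file name illustrative; the expected shape follows from the sed edits):
	
	  cat /etc/cni/net.d/200-loopback.conf
	  # { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }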
	I0522 18:11:46.138397   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.138424   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.138550   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.152035   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:11:46.160068   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:11:46.168623   93587 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.168674   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:11:46.177246   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.185321   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:11:46.193307   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.201602   93587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:11:46.209350   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:11:46.217593   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:11:46.225824   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:11:46.234419   93587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:11:46.241490   93587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:11:46.248503   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.323097   93587 ssh_runner.go:195] Run: sudo systemctl restart containerd
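	A quick way to confirm the cgroup-driver edits took effect after the restart (default config path, the same file the sed commands edited):
	
	  grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	  sudo systemctl is-active containerd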
	I0522 18:11:46.411392   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.411434   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.411494   93587 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:11:46.422471   93587 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:11:46.422535   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:11:46.433407   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.449148   93587 ssh_runner.go:195] Run: which cri-dockerd
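	The crictl.yaml just written repoints crictl from the containerd socket to cri-dockerd, since Docker is the selected runtime. The same endpoint can also be supplied per invocation for spot checks (a sketch, not from this log):
	
	  sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a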
	I0522 18:11:46.452464   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:11:46.460126   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:11:46.477806   93587 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:11:46.581019   93587 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:11:46.682974   93587 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.683118   93587 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
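	The 130-byte daemon.json staged here carries the dockerd cgroup-driver setting that the daemon-reload/restart just below applies; a minimal file with the same effect (a sketch, not the exact bytes minikube writes):
	
	  printf '{\n  "exec-opts": ["native.cgroupdriver=cgroupfs"]\n}\n' | sudo tee /etc/docker/daemon.json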
	I0522 18:11:46.699398   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.783890   93587 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:11:47.043450   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:11:47.053302   93587 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:11:47.063710   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.072923   93587 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:11:47.142683   93587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:11:47.222920   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.298978   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:11:47.310891   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.320183   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.395538   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:11:47.457881   93587 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:11:47.457934   93587 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:11:47.461279   93587 start.go:562] Will wait 60s for crictl version
	I0522 18:11:47.461343   93587 ssh_runner.go:195] Run: which crictl
	I0522 18:11:47.464606   93587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:11:47.495432   93587 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:11:47.495495   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.517256   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.541495   93587 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:11:47.541571   93587 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:11:47.557260   93587 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:11:47.560496   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
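	The /etc/hosts rewrite above is idempotent: it strips any existing line for the name, appends the fresh mapping, and builds the result in a temp file before copying it over /etc/hosts. The same pattern with hypothetical values:
	
	  # drop the old entry (if any), append the new one, then install the result
	  { grep -v $'\tmy.internal.host$' /etc/hosts; echo $'192.0.2.10\tmy.internal.host'; } > /tmp/hosts.$$
	  sudo cp /tmp/hosts.$$ /etc/hosts && rm /tmp/hosts.$$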
	I0522 18:11:47.570471   93587 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:11:47.570586   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:47.570631   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.587878   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.587899   93587 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:11:47.587950   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.606514   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.606541   93587 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:11:47.606558   93587 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:11:47.606687   93587 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:11:47.606735   93587 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:11:47.652790   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:47.652807   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:47.652824   93587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:11:47.652857   93587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:11:47.652974   93587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
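	The four documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) form a single multi-document kubeadm config file; on a fresh control plane it would be consumed by kubeadm directly (a sketch; this run stages the file as kubeadm.yaml.new further below):
	
	  sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml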
	
	I0522 18:11:47.652992   93587 kube-vip.go:115] generating kube-vip config ...
	I0522 18:11:47.653024   93587 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:11:47.663570   93587 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
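	The lsmod probe found no ip_vs modules, so kube-vip is configured for ARP-based failover rather than IPVS load-balancing. On a kernel that ships the modules they could be loaded explicitly (a sketch; availability depends on the host kernel build):
	
	  sudo modprobe ip_vs
	  sudo modprobe ip_vs_rr
	  lsmod | grep ip_vs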
	I0522 18:11:47.663661   93587 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
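	The static pod above runs kube-vip with leader election (vip_leaderelection, vip_leasename): one control-plane node at a time holds the lease and answers ARP for the VIP 192.168.49.254. Once the API server is reachable, the current holder can be read from the Lease object (a sketch):
	
	  kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'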
	I0522 18:11:47.663702   93587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:11:47.671164   93587 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:11:47.671218   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:11:47.678433   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:11:47.693280   93587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:11:47.707810   93587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:11:47.722391   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:11:47.737026   93587 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:11:47.739845   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:11:47.748775   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.823891   93587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:11:47.835577   93587 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:11:47.835598   93587 certs.go:194] generating shared ca certs ...
	I0522 18:11:47.835613   93587 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:47.835758   93587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:11:47.835842   93587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:11:47.835862   93587 certs.go:256] generating profile certs ...
	I0522 18:11:47.835960   93587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:11:47.835985   93587 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:11:47.836008   93587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:11:48.121096   93587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:11:48.121121   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121275   93587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:11:48.121287   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121352   93587 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:11:48.121491   93587 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 18:11:48.121607   93587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:11:48.121622   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:11:48.121634   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:11:48.121647   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:11:48.121659   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:11:48.121671   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:11:48.121684   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:11:48.121695   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:11:48.121706   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:11:48.121761   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:11:48.121786   93587 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:11:48.121796   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:11:48.121824   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:11:48.121846   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:11:48.121868   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:11:48.121906   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:48.121932   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.121947   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.121963   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.122488   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:11:48.143159   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:11:48.162787   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:11:48.182506   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:11:48.201936   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:11:48.221464   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:11:48.240723   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:11:48.260323   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:11:48.279765   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:11:48.299293   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:11:48.318925   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:11:48.338728   93587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:11:48.353309   93587 ssh_runner.go:195] Run: openssl version
	I0522 18:11:48.358049   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:11:48.365885   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368779   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368829   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.374835   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:11:48.382122   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:11:48.389749   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392543   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392586   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.400682   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:11:48.407800   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:11:48.415568   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418291   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418342   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.424132   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
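	The HASH.0 symlinks created in this block (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's hashed CA-directory layout: the link name is the certificate's subject-name hash, which is how TLS clients look up a CA under /etc/ssl/certs. The hash is exactly what the openssl x509 -hash calls above print, e.g.:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941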
	I0522 18:11:48.431192   93587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:11:48.433941   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:11:48.439661   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:11:48.445338   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:11:48.451065   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:11:48.456627   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:11:48.461988   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
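	Each -checkend 86400 probe above exits non-zero if the certificate expires within 86400 seconds (24 hours); a zero exit is what lets the run keep an existing cert instead of regenerating it. Standalone form (a sketch):
	
	  openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	    && echo "valid for at least 24h" || echo "expiring soon"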
	I0522 18:11:48.467384   93587 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:48.467494   93587 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:11:48.485081   93587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:11:48.492968   93587 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:11:48.492987   93587 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:11:48.492994   93587 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:11:48.493030   93587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:11:48.500158   93587 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:11:48.500524   93587 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.500622   93587 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:11:48.500860   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.501224   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.501415   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.501829   93587 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:11:48.502116   93587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:11:48.509165   93587 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:11:48.509192   93587 kubeadm.go:591] duration metric: took 16.193394ms to restartPrimaryControlPlane
	I0522 18:11:48.509203   93587 kubeadm.go:393] duration metric: took 41.824441ms to StartCluster
	I0522 18:11:48.509229   93587 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.509281   93587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.509984   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.510194   93587 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:11:48.510219   93587 start.go:240] waiting for startup goroutines ...
	I0522 18:11:48.510231   93587 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:11:48.510288   93587 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:11:48.510308   93587 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:11:48.510350   93587 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 18:11:48.510358   93587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	W0522 18:11:48.510362   93587 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:11:48.510372   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:48.510392   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.510671   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.510833   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.531981   93587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:11:48.529656   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.532267   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.533374   93587 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.533470   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:11:48.533514   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.533609   93587 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:11:48.533626   93587 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:11:48.533656   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.533986   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.549936   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.550918   93587 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:11:48.550941   93587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:11:48.550989   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.567412   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.643338   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.658967   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.695623   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.695654   93587 retry.go:31] will retry after 143.566199ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:11:48.710095   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.710122   93587 retry.go:31] will retry after 196.09206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
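	From here the run repeats the same failure: every kubectl apply against https://localhost:8443 is refused, and the accompanying "cannot parse IP address of length 0" errors are consistent with the <nil> entry in the apiserver certificate's IP list logged earlier. minikube's retry helper re-runs each apply after a jittered, roughly increasing delay; a comparable loop (file name and delays illustrative, not minikube's exact schedule):
	
	  # retry with increasing waits until the apply succeeds or attempts run out
	  for delay in 0.2 0.4 0.8 1.6; do
	    kubectl apply --force -f addon.yaml && break
	    sleep "$delay"
	  done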
	I0522 18:11:48.839382   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:48.889703   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.889737   93587 retry.go:31] will retry after 405.6758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.906883   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.957678   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.957706   93587 retry.go:31] will retry after 481.984617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.296239   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.346745   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.346776   93587 retry.go:31] will retry after 298.316645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.439941   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.490892   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.490924   93587 retry.go:31] will retry after 365.174941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.646180   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.695995   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.696026   93587 retry.go:31] will retry after 622.662088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.856274   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.908213   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.908240   93587 retry.go:31] will retry after 465.598462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.319768   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:50.370352   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.370393   93587 retry.go:31] will retry after 1.153542566s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.374493   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:50.427342   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.427370   93587 retry.go:31] will retry after 1.760070779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.524500   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:51.576096   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.576127   93587 retry.go:31] will retry after 1.395298614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.187677   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:52.238330   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.238363   93587 retry.go:31] will retry after 2.838643955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.972468   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:53.024864   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:53.024894   93587 retry.go:31] will retry after 3.988192679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.078985   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:55.254504   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.254547   93587 retry.go:31] will retry after 1.898473733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.013394   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:57.065110   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.065143   93587 retry.go:31] will retry after 3.026639765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.153313   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:57.205183   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.205216   93587 retry.go:31] will retry after 4.512900176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.093267   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:00.144874   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.144907   93587 retry.go:31] will retry after 4.624822439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.718976   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:01.770260   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.770289   93587 retry.go:31] will retry after 6.597322484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.770613   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:04.821736   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.821765   93587 retry.go:31] will retry after 6.276558674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.369690   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:08.421665   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.421695   93587 retry.go:31] will retry after 4.88361876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.099397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:11.150176   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.150214   93587 retry.go:31] will retry after 14.618513106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.307405   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:13.358292   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.358325   93587 retry.go:31] will retry after 11.702428572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.064329   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:25.116230   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.116269   93587 retry.go:31] will retry after 20.635119238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.768934   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:25.819335   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.819366   93587 retry.go:31] will retry after 22.551209597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.755397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:45.807295   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.807335   93587 retry.go:31] will retry after 48.223563966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.371303   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:48.422526   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.422554   93587 retry.go:31] will retry after 21.925283254s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:13:10.348911   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:13:10.401430   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:10.401550   93587 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
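Both addon applies fail for the same underlying reason: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing listening on localhost:8443 that download itself is refused, so --validate=false would merely change the error text rather than enable the addon. One way to confirm the root cause is to probe the apiserver's health endpoint directly before retrying; a minimal sketch (the port comes from the error text, the probe itself is illustrative):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The apiserver serves a self-signed cert in this setup;
            // skip verification for this reachability probe only.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://localhost:8443/healthz")
        if err != nil {
            // Matches the "connection refused" seen in the log.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("apiserver healthz:", resp.Status)
    }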
	I0522 18:13:34.031408   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:13:34.084103   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:34.084199   93587 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:13:34.087605   93587 out.go:177] * Enabled addons: 
	I0522 18:13:34.092136   93587 addons.go:505] duration metric: took 1m45.581904576s for enable addons: enabled=[]
	I0522 18:13:34.092168   93587 start.go:245] waiting for cluster config update ...
	I0522 18:13:34.092175   93587 start.go:254] writing updated cluster config ...
	I0522 18:13:34.093767   93587 out.go:177] 
	I0522 18:13:34.094950   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:13:34.095010   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.096476   93587 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:13:34.097816   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:13:34.098828   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:13:34.099818   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:13:34.099834   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:13:34.099879   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:13:34.099916   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:13:34.099930   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:13:34.100028   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.116605   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:13:34.116636   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:13:34.116654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:13:34.116685   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:34.116739   93587 start.go:364] duration metric: took 36.742µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:34.116754   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:34.116759   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:34.116975   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.131815   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:13:34.131835   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:34.133519   93587 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:13:34.134577   93587 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:13:34.386505   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.403758   93587 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:13:34.404176   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:34.421199   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:13:34.421255   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:34.437668   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:13:34.438642   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.438697   93587 retry.go:31] will retry after 159.621723ms: ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	W0522 18:13:34.599398   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.599427   93587 retry.go:31] will retry after 217.688969ms: ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.948280   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:13:34.952728   93587 fix.go:56] duration metric: took 835.959949ms for fixHost
	I0522 18:13:34.952759   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 836.005567ms
	W0522 18:13:34.952776   93587 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:13:34.952870   93587 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:13:34.952882   93587 start.go:728] Will try again in 5 seconds ...
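The "container addresses should have 2 values, got 1 values: []" failure follows from the inspect template used above: it prints IPAddress and GlobalIPv6Address joined by a comma, and when the container has no entry for the ha-828033-m02 network the template output is empty, which splits into a single empty field. A minimal reproduction of that parse, using a hypothetical parseAddrs helper rather than minikube's own code:

    package main

    import (
        "fmt"
        "strings"
    )

    // parseAddrs splits the output of the docker-inspect template
    // "{{.IPAddress}},{{.GlobalIPv6Address}}" into IPv4 and IPv6.
    func parseAddrs(out string) (ipv4, ipv6 string, err error) {
        vals := strings.Split(strings.TrimSpace(out), ",")
        if len(vals) != 2 {
            // An empty template result splits into a single empty field,
            // producing exactly the "got 1 values: []" error seen above.
            return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(vals), vals)
        }
        return vals[0], vals[1], nil
    }

    func main() {
        _, _, err := parseAddrs("") // container missing from the "ha-828033-m02" network
        fmt.Println(err)
    }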
	I0522 18:13:39.953931   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:39.954069   93587 start.go:364] duration metric: took 66.237µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:39.954098   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:39.954106   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:39.954430   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:39.971326   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:13:39.971351   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:39.973352   93587 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:13:39.974806   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:13:39.974895   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:39.990164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:39.990366   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:39.990382   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:13:40.106411   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:13:40.106441   93587 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:13:40.106497   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.123164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.123396   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.123412   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:13:40.245387   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:13:40.245458   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.262355   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.262539   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.262563   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
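The shell heredoc above guarantees the hostname resolves locally: if no /etc/hosts line already ends in ha-828033-m02, it rewrites the 127.0.1.1 entry in place, or appends one if none exists. The same logic in Go, purely as an illustration of what the grep/sed pair does:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const host = "ha-828033-m02"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        // Same test as: grep -xq '.*\sha-828033-m02' /etc/hosts
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(host) + `$`).Match(data) {
            fmt.Println("hosts entry already present")
            return
        }
        // Same rewrite as: sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g'
        re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        var out []byte
        if re.Match(data) {
            out = re.ReplaceAll(data, []byte("127.0.1.1 "+host))
        } else {
            out = append(data, []byte("127.0.1.1 "+host+"\n")...)
        }
        fmt.Printf("would write:\n%s", out)
    }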
	I0522 18:13:40.375115   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:13:40.375140   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:13:40.375156   93587 ubuntu.go:177] setting up certificates
	I0522 18:13:40.375167   93587 provision.go:84] configureAuth start
	I0522 18:13:40.375212   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.390878   93587 provision.go:87] duration metric: took 15.702592ms to configureAuth
	W0522 18:13:40.390903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.390928   93587 retry.go:31] will retry after 70.356µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.392042   93587 provision.go:84] configureAuth start
	I0522 18:13:40.392097   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.408007   93587 provision.go:87] duration metric: took 15.947883ms to configureAuth
	W0522 18:13:40.408024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.408044   93587 retry.go:31] will retry after 137.47µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.409151   93587 provision.go:84] configureAuth start
	I0522 18:13:40.409201   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.423891   93587 provision.go:87] duration metric: took 14.725235ms to configureAuth
	W0522 18:13:40.423909   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.423925   93587 retry.go:31] will retry after 262.374µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.425034   93587 provision.go:84] configureAuth start
	I0522 18:13:40.425086   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.439293   93587 provision.go:87] duration metric: took 14.241319ms to configureAuth
	W0522 18:13:40.439314   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.439330   93587 retry.go:31] will retry after 298.899µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.440439   93587 provision.go:84] configureAuth start
	I0522 18:13:40.440498   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.455314   93587 provision.go:87] duration metric: took 14.857395ms to configureAuth
	W0522 18:13:40.455331   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.455346   93587 retry.go:31] will retry after 425.458µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.456456   93587 provision.go:84] configureAuth start
	I0522 18:13:40.456517   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.473826   93587 provision.go:87] duration metric: took 17.346003ms to configureAuth
	W0522 18:13:40.473848   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.473864   93587 retry.go:31] will retry after 794.432µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.474977   93587 provision.go:84] configureAuth start
	I0522 18:13:40.475045   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.491066   93587 provision.go:87] duration metric: took 16.070525ms to configureAuth
	W0522 18:13:40.491088   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.491107   93587 retry.go:31] will retry after 1.614344ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.493281   93587 provision.go:84] configureAuth start
	I0522 18:13:40.493345   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.508551   93587 provision.go:87] duration metric: took 15.254686ms to configureAuth
	W0522 18:13:40.508569   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.508587   93587 retry.go:31] will retry after 998.104µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.509712   93587 provision.go:84] configureAuth start
	I0522 18:13:40.509790   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.525006   93587 provision.go:87] duration metric: took 15.263842ms to configureAuth
	W0522 18:13:40.525024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.525042   93587 retry.go:31] will retry after 3.338034ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.529222   93587 provision.go:84] configureAuth start
	I0522 18:13:40.529282   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.544880   93587 provision.go:87] duration metric: took 15.639211ms to configureAuth
	W0522 18:13:40.544898   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.544922   93587 retry.go:31] will retry after 3.40783ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.549101   93587 provision.go:84] configureAuth start
	I0522 18:13:40.549153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.564670   93587 provision.go:87] duration metric: took 15.552453ms to configureAuth
	W0522 18:13:40.564691   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.564707   93587 retry.go:31] will retry after 7.302355ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.572891   93587 provision.go:84] configureAuth start
	I0522 18:13:40.572957   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.588884   93587 provision.go:87] duration metric: took 15.972307ms to configureAuth
	W0522 18:13:40.588903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.588921   93587 retry.go:31] will retry after 5.301531ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.595100   93587 provision.go:84] configureAuth start
	I0522 18:13:40.595153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.610191   93587 provision.go:87] duration metric: took 15.074227ms to configureAuth
	W0522 18:13:40.610211   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.610230   93587 retry.go:31] will retry after 11.026949ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.621370   93587 provision.go:84] configureAuth start
	I0522 18:13:40.621446   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.636327   93587 provision.go:87] duration metric: took 14.934708ms to configureAuth
	W0522 18:13:40.636340   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.636356   93587 retry.go:31] will retry after 25.960513ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.662569   93587 provision.go:84] configureAuth start
	I0522 18:13:40.662637   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.677809   93587 provision.go:87] duration metric: took 15.220921ms to configureAuth
	W0522 18:13:40.677824   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.677840   93587 retry.go:31] will retry after 32.75774ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.711021   93587 provision.go:84] configureAuth start
	I0522 18:13:40.711093   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.726493   93587 provision.go:87] duration metric: took 15.45214ms to configureAuth
	W0522 18:13:40.726508   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.726524   93587 retry.go:31] will retry after 36.849589ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.763725   93587 provision.go:84] configureAuth start
	I0522 18:13:40.763797   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.779769   93587 provision.go:87] duration metric: took 16.019178ms to configureAuth
	W0522 18:13:40.779786   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.779806   93587 retry.go:31] will retry after 56.725665ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.837004   93587 provision.go:84] configureAuth start
	I0522 18:13:40.837114   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.852417   93587 provision.go:87] duration metric: took 15.386685ms to configureAuth
	W0522 18:13:40.852435   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.852451   93587 retry.go:31] will retry after 111.712266ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.964732   93587 provision.go:84] configureAuth start
	I0522 18:13:40.964841   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.981335   93587 provision.go:87] duration metric: took 16.561934ms to configureAuth
	W0522 18:13:40.981354   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.981372   93587 retry.go:31] will retry after 119.589549ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.101655   93587 provision.go:84] configureAuth start
	I0522 18:13:41.101767   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.118304   93587 provision.go:87] duration metric: took 16.624114ms to configureAuth
	W0522 18:13:41.118332   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.118349   93587 retry.go:31] will retry after 172.20415ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.290646   93587 provision.go:84] configureAuth start
	I0522 18:13:41.290734   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.306781   93587 provision.go:87] duration metric: took 16.099389ms to configureAuth
	W0522 18:13:41.306799   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.306815   93587 retry.go:31] will retry after 467.479675ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.774386   93587 provision.go:84] configureAuth start
	I0522 18:13:41.774495   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.790035   93587 provision.go:87] duration metric: took 15.610421ms to configureAuth
	W0522 18:13:41.790054   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.790070   93587 retry.go:31] will retry after 663.257318ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.453817   93587 provision.go:84] configureAuth start
	I0522 18:13:42.453935   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.473961   93587 provision.go:87] duration metric: took 20.113537ms to configureAuth
	W0522 18:13:42.473982   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.473999   93587 retry.go:31] will retry after 453.336791ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.928400   93587 provision.go:84] configureAuth start
	I0522 18:13:42.928480   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.944835   93587 provision.go:87] duration metric: took 16.404983ms to configureAuth
	W0522 18:13:42.944858   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.944874   93587 retry.go:31] will retry after 1.661774658s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.607615   93587 provision.go:84] configureAuth start
	I0522 18:13:44.607723   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:44.623466   93587 provision.go:87] duration metric: took 15.817599ms to configureAuth
	W0522 18:13:44.623490   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.623506   93587 retry.go:31] will retry after 2.087899686s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.711969   93587 provision.go:84] configureAuth start
	I0522 18:13:46.712058   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:46.728600   93587 provision.go:87] duration metric: took 16.596208ms to configureAuth
	W0522 18:13:46.728620   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.728636   93587 retry.go:31] will retry after 1.751255493s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.480034   93587 provision.go:84] configureAuth start
	I0522 18:13:48.480138   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:48.495909   93587 provision.go:87] duration metric: took 15.845589ms to configureAuth
	W0522 18:13:48.495927   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.495944   93587 retry.go:31] will retry after 3.216449309s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.712476   93587 provision.go:84] configureAuth start
	I0522 18:13:51.712600   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:51.728675   93587 provision.go:87] duration metric: took 16.149731ms to configureAuth
	W0522 18:13:51.728694   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.728713   93587 retry.go:31] will retry after 4.442037166s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.171311   93587 provision.go:84] configureAuth start
	I0522 18:13:56.171390   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:56.188514   93587 provision.go:87] duration metric: took 17.174931ms to configureAuth
	W0522 18:13:56.188532   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.188548   93587 retry.go:31] will retry after 12.471520302s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.660614   93587 provision.go:84] configureAuth start
	I0522 18:14:08.660710   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:08.677166   93587 provision.go:87] duration metric: took 16.519042ms to configureAuth
	W0522 18:14:08.677185   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.677201   93587 retry.go:31] will retry after 10.952874884s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.630561   93587 provision.go:84] configureAuth start
	I0522 18:14:19.630655   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:19.646798   93587 provision.go:87] duration metric: took 16.206763ms to configureAuth
	W0522 18:14:19.646816   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.646833   93587 retry.go:31] will retry after 24.173560862s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.822465   93587 provision.go:84] configureAuth start
	I0522 18:14:43.822544   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:43.838993   93587 provision.go:87] duration metric: took 16.502247ms to configureAuth
	W0522 18:14:43.839013   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.839034   93587 retry.go:31] will retry after 18.866878171s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.707256   93587 provision.go:84] configureAuth start
	I0522 18:15:02.707363   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:15:02.723837   93587 provision.go:87] duration metric: took 16.544569ms to configureAuth
	W0522 18:15:02.723855   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723871   93587 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723880   93587 machine.go:97] duration metric: took 1m22.749059211s to provisionDockerMachine
	I0522 18:15:02.723935   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:02.723966   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:15:02.739583   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:15:02.819663   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:02.823565   93587 fix.go:56] duration metric: took 1m22.869456878s for fixHost
	I0522 18:15:02.823585   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m22.869501248s
	W0522 18:15:02.823659   93587 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.826139   93587 out.go:177] 
	W0522 18:15:02.827395   93587 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:15:02.827414   93587 out.go:239] * 
	W0522 18:15:02.828270   93587 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:15:02.829647   93587 out.go:177] 

                                                
                                                
** /stderr **
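The failure above is mechanical: every configureAuth attempt re-runs docker container inspect with a Go template that only emits output when the container has an entry under .NetworkSettings.Networks for "ha-828033-m02". After the restart the m02 container is not attached to that network, so the template prints an empty string; splitting that string on the comma yields a single empty element, which %v renders as [] in the "got 1 values: []" messages. The backoff (5ms growing to roughly 24s) cannot help, because the missing network attachment never reappears. Below is a minimal sketch of the parsing failure, assuming the template output is split on a comma as the messages suggest; the structs are hypothetical stand-ins for Docker's inspect data:

package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// endpoint is a hypothetical stand-in for one entry of Docker's
// .NetworkSettings.Networks in the inspect output.
type endpoint struct {
	IPAddress         string
	GlobalIPv6Address string
}

type container struct {
	NetworkSettings struct {
		Networks map[string]*endpoint
	}
}

// The template below is the one the log runs verbatim.
const ipTmpl = `{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`

func main() {
	var c container
	// No "ha-828033-m02" key: index returns a nil *endpoint, so
	// {{with}} skips its body and the template output is "".
	c.NetworkSettings.Networks = map[string]*endpoint{}

	t := template.Must(template.New("ip").Parse(ipTmpl))
	var buf bytes.Buffer
	if err := t.Execute(&buf, c); err != nil {
		panic(err)
	}

	addrs := strings.Split(buf.String(), ",") // [""]: a single empty element
	if len(addrs) != 2 {
		// Prints: container addresses should have 2 values, got 1 values: []
		fmt.Printf("container addresses should have 2 values, got %d values: %v\n", len(addrs), addrs)
	}
}

For a healthy attachment the same template produces two comma-separated values; the docker inspect output for the primary node below shows why (IPAddress "192.168.49.2" and an empty GlobalIPv6Address yield "192.168.49.2,", which splits cleanly into two).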
ha_test.go:469: failed to run minikube start. args "out/minikube-linux-amd64 node list -p ha-828033 -v=7 --alsologtostderr" : exit status 80
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-828033
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:11:41.545152719Z",
	            "FinishedAt": "2024-05-22T18:11:40.848620795Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30d0c98ea9130cbb800c462fe8803bee586edca8539288200e46ac88b3b024b2",
	            "SandboxKey": "/var/run/docker/netns/30d0c98ea913",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32806"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32804"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "2ba8cb80c659667fb6bda12680449f8c1464b6ce638e2e5d144c21ea7f6d07eb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
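The inspect output above is what a healthy container looks like from the provisioner's point of view: Networks["ha-828033"] carries IPAddress "192.168.49.2", and Ports["22/tcp"][0].HostPort is "32807", the value that the log's port-lookup template ('{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}') resolves before opening an SSH client. A small sketch of that lookup; portBinding is a hypothetical stand-in for Docker's type:

package main

import (
	"fmt"
	"os"
	"text/template"
)

// portBinding mirrors one entry of .NetworkSettings.Ports in the
// inspect JSON above (hypothetical stand-in for Docker's type).
type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectData struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

// The template below is the one the log runs verbatim (without the
// surrounding single quotes it adds for the shell).
const portTmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

func main() {
	var d inspectData
	d.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "32807"}},
	}

	t := template.Must(template.New("port").Parse(portTmpl))
	if err := t.Execute(os.Stdout, d); err != nil {
		panic(err)
	}
	fmt.Println() // prints 32807, the host port the SSH client dials
}

Executed against the data above it prints 32807, matching the SSH setup for ha-828033 in the Last Start log further down ("Using SSH client type: native ... 127.0.0.1 32807").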
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 2 (237.042584ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
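The "(may be ok)" note exists because the harness interprets the exit code itself: the host container reports Running, yet the status command exits 2, presumably because other cluster components are not healthy after the failed restart. For reproducing that check outside the harness, the exit code can be captured as below; runStatus is a hypothetical helper, not the helpers_test.go implementation, and "minikube" stands in for the report's out/minikube-linux-amd64 binary:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runStatus runs "minikube status" for a profile and returns the
// command's exit code alongside its combined output.
func runStatus(profile string) (int, string) {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}", "-p", profile)
	out, err := cmd.CombinedOutput()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode(), string(out) // non-zero exit, e.g. 2 above
		}
		return -1, err.Error() // the command could not be started at all
	}
	return 0, string(out)
}

func main() {
	code, out := runStatus("ha-828033")
	fmt.Printf("exit=%d output=%q\n", code, out)
}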
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-cw6wc                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh                                                    |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1                                                        |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033 -v=7                                                           | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-828033 -v=7                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC | 22 May 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:11:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:11:41.134703   93587 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:11:41.134930   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.134938   93587 out.go:304] Setting ErrFile to fd 2...
	I0522 18:11:41.134942   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.135123   93587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:11:41.135663   93587 out.go:298] Setting JSON to false
	I0522 18:11:41.136597   93587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3245,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:11:41.136652   93587 start.go:139] virtualization: kvm guest
	I0522 18:11:41.138603   93587 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:11:41.139872   93587 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:11:41.139877   93587 notify.go:220] Checking for updates...
	I0522 18:11:41.141388   93587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:11:41.142594   93587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:41.143720   93587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:11:41.144893   93587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:11:41.145865   93587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:11:41.147279   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:41.147391   93587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:11:41.167202   93587 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:11:41.167354   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.213981   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.205284379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.214076   93587 docker.go:295] overlay module found
	I0522 18:11:41.216233   93587 out.go:177] * Using the docker driver based on existing profile
	I0522 18:11:41.217269   93587 start.go:297] selected driver: docker
	I0522 18:11:41.217284   93587 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.217363   93587 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:11:41.217435   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.262537   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.253560233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.263171   93587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:11:41.263204   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:41.263213   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:41.263260   93587 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.265782   93587 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:11:41.266790   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:11:41.267878   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:11:41.268972   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:41.268999   93587 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:11:41.268994   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:11:41.269006   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:11:41.269151   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:11:41.269173   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:11:41.269261   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.283614   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:11:41.283635   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:11:41.283654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:11:41.283689   93587 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:11:41.283753   93587 start.go:364] duration metric: took 41.779µs to acquireMachinesLock for "ha-828033"
	I0522 18:11:41.283775   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:11:41.283786   93587 fix.go:54] fixHost starting: 
	I0522 18:11:41.283991   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.299535   93587 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:11:41.299560   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:11:41.301277   93587 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:11:41.302545   93587 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:11:41.550741   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.567723   93587 kic.go:430] container "ha-828033" state is running.
	I0522 18:11:41.568146   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:41.584785   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.585001   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:11:41.585061   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:41.601067   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:41.601257   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:41.601268   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:11:41.601940   93587 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58778->127.0.0.1:32807: read: connection reset by peer
	I0522 18:11:44.714380   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:11:44.714404   93587 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:11:44.714459   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.731671   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.731883   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.731902   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:11:44.852943   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:11:44.853043   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.869576   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.869790   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.869817   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
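The shell above is an idempotent /etc/hosts edit: leave the file alone if the hostname is already mapped, rewrite an existing 127.0.1.1 line if there is one, otherwise append a new entry. A rough Go equivalent, as a sketch only (ensureHostsEntry is a hypothetical helper, not minikube's code; path and hostname come from the log):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry mirrors the shell snippet: no-op if the hostname is
	// already the last field of some line, otherwise rewrite the 127.0.1.1
	// line if present, otherwise append one.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for _, line := range lines {
			if f := strings.Fields(line); len(f) > 1 && f[len(f)-1] == hostname {
				return nil // already mapped
			}
		}
		for i, line := range lines {
			if strings.HasPrefix(line, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
			}
		}
		lines = append(lines, "127.0.1.1 "+hostname)
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "ha-828033"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}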
	I0522 18:11:44.979057   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:11:44.979089   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:11:44.979116   93587 ubuntu.go:177] setting up certificates
	I0522 18:11:44.979134   93587 provision.go:84] configureAuth start
	I0522 18:11:44.979199   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:44.994933   93587 provision.go:143] copyHostCerts
	I0522 18:11:44.994969   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995017   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:11:44.995033   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995108   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:11:44.995224   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995252   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:11:44.995259   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995322   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:11:44.995400   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995422   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:11:44.995429   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995474   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:11:44.995562   93587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
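provision.go generates the machine's server certificate with the organization and SAN list logged above. A self-contained sketch of producing a certificate with those SANs via Go's crypto/x509 (self-signed here for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair named in the log):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-828033"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs taken from the san=[...] list in the log line above.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:    []string{"ha-828033", "localhost", "minikube"},
		}
		// Self-signed: the template doubles as the parent certificate.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}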
	I0522 18:11:45.135697   93587 provision.go:177] copyRemoteCerts
	I0522 18:11:45.135763   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:11:45.135818   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.152921   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.238902   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:11:45.238973   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:11:45.258885   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:11:45.258948   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0522 18:11:45.278444   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:11:45.278494   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:11:45.297780   93587 provision.go:87] duration metric: took 318.629986ms to configureAuth
	I0522 18:11:45.297808   93587 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:11:45.297962   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:45.298004   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.313749   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.313923   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.313939   93587 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:11:45.427468   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:11:45.427494   93587 ubuntu.go:71] root file system type: overlay
	I0522 18:11:45.427580   93587 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:11:45.427626   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.444225   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.444413   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.444506   93587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:11:45.564594   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:11:45.564669   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.580720   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.580903   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.580920   93587 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:11:45.695828   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:11:45.695857   93587 machine.go:97] duration metric: took 4.110841908s to provisionDockerMachine
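The diff-or-replace one-liner above avoids a needless Docker restart: the new unit is written to docker.service.new, and only if it differs from the installed unit is it moved into place, followed by daemon-reload, enable, and restart. The same idea in Go, as a sketch under the assumption of running as root on the target host (updateUnit is a hypothetical name):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// updateUnit replaces a systemd unit file only when its content changed,
	// and restarts the service only in that case.
	func updateUnit(path string, newContent []byte) error {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, newContent) {
			return nil // unchanged: skip daemon-reload and restart entirely
		}
		if err := os.WriteFile(path, newContent, 0644); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", "docker"},
			{"systemctl", "restart", "docker"},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%s: %v: %s", args[0], err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
		if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}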
	I0522 18:11:45.695867   93587 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:11:45.695877   93587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:11:45.695924   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:11:45.695955   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.712232   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.795493   93587 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:11:45.798393   93587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:11:45.798434   93587 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:11:45.798444   93587 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:11:45.798453   93587 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:11:45.798471   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:11:45.798511   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:11:45.798590   93587 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:11:45.798602   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:11:45.798690   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:11:45.806167   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:45.826168   93587 start.go:296] duration metric: took 130.28741ms for postStartSetup
	I0522 18:11:45.826240   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:11:45.826284   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.842515   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.923712   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:11:45.927618   93587 fix.go:56] duration metric: took 4.643832098s for fixHost
	I0522 18:11:45.927656   93587 start.go:83] releasing machines lock for "ha-828033", held for 4.643887227s
	I0522 18:11:45.927713   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:45.944156   93587 ssh_runner.go:195] Run: cat /version.json
	I0522 18:11:45.944201   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.944235   93587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:11:45.944288   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.962364   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.962780   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:46.042681   93587 ssh_runner.go:195] Run: systemctl --version
	I0522 18:11:46.109435   93587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:11:46.113688   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:11:46.129549   93587 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:11:46.129616   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:11:46.137374   93587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:11:46.138397   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.138424   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.138550   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.152035   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:11:46.160068   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:11:46.168623   93587 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.168674   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:11:46.177246   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.185321   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:11:46.193307   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.201602   93587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:11:46.209350   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:11:46.217593   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:11:46.225824   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:11:46.234419   93587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:11:46.241490   93587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:11:46.248503   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.323097   93587 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:11:46.411392   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.411434   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.411494   93587 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:11:46.422471   93587 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:11:46.422535   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:11:46.433407   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.449148   93587 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:11:46.452464   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:11:46.460126   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:11:46.477806   93587 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:11:46.581019   93587 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:11:46.682974   93587 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.683118   93587 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:11:46.699398   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.783890   93587 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:11:47.043450   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:11:47.053302   93587 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:11:47.063710   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.072923   93587 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:11:47.142683   93587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:11:47.222920   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.298978   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:11:47.310891   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.320183   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.395538   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:11:47.457881   93587 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:11:47.457934   93587 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:11:47.461279   93587 start.go:562] Will wait 60s for crictl version
	I0522 18:11:47.461343   93587 ssh_runner.go:195] Run: which crictl
	I0522 18:11:47.464606   93587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:11:47.495432   93587 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:11:47.495495   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.517256   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.541495   93587 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:11:47.541571   93587 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:11:47.557260   93587 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:11:47.560496   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:11:47.570471   93587 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:11:47.570586   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:47.570631   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.587878   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.587899   93587 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:11:47.587950   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.606514   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.606541   93587 cache_images.go:84] Images are preloaded, skipping loading
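Image loading is skipped because every image from the preload manifest is already in the daemon. A sketch of that presence check, shelling out to the same `docker images --format` invocation as the log (the expected list is abbreviated to three of the images above; minikube's real check covers the full manifest):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// List the image refs the local docker daemon knows about.
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			panic(err)
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		for _, img := range expected {
			if !have[img] {
				fmt.Println("missing:", img)
			}
		}
	}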
	I0522 18:11:47.606558   93587 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:11:47.606687   93587 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:11:47.606735   93587 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:11:47.652790   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:47.652807   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:47.652824   93587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:11:47.652857   93587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:11:47.652974   93587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:11:47.652992   93587 kube-vip.go:115] generating kube-vip config ...
	I0522 18:11:47.653024   93587 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:11:47.663570   93587 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
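The probe above (kube-vip.go:115) decides whether kube-vip may use IPVS-based control-plane load balancing; with no ip_vs module loaded, the config below is generated without that feature. A sketch of an equivalent probe that reads /proc/modules directly instead of shelling out to lsmod (an illustration, not minikube's code):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ipvsAvailable reports whether any ip_vs* kernel module is loaded.
	// /proc/modules lists one module per line, name first, which is the
	// same data lsmod formats.
	func ipvsAvailable() bool {
		data, err := os.ReadFile("/proc/modules")
		if err != nil {
			return false
		}
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasPrefix(line, "ip_vs") {
				return true
			}
		}
		return false
	}

	func main() { fmt.Println("ip_vs loaded:", ipvsAvailable()) }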
	I0522 18:11:47.663661   93587 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0522 18:11:47.663702   93587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:11:47.671164   93587 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:11:47.671218   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:11:47.678433   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:11:47.693280   93587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:11:47.707810   93587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:11:47.722391   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:11:47.737026   93587 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:11:47.739845   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:11:47.748775   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.823891   93587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:11:47.835577   93587 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:11:47.835598   93587 certs.go:194] generating shared ca certs ...
	I0522 18:11:47.835613   93587 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:47.835758   93587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:11:47.835842   93587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:11:47.835862   93587 certs.go:256] generating profile certs ...
	I0522 18:11:47.835960   93587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:11:47.835985   93587 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:11:47.836008   93587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:11:48.121096   93587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:11:48.121121   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121275   93587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:11:48.121287   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121352   93587 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:11:48.121491   93587 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 18:11:48.121607   93587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:11:48.121622   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:11:48.121634   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:11:48.121647   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:11:48.121659   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:11:48.121671   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:11:48.121684   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:11:48.121695   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:11:48.121706   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:11:48.121761   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:11:48.121786   93587 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:11:48.121796   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:11:48.121824   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:11:48.121846   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:11:48.121868   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:11:48.121906   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:48.121932   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.121947   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.121963   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.122488   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:11:48.143159   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:11:48.162787   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:11:48.182506   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:11:48.201936   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:11:48.221464   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:11:48.240723   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:11:48.260323   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:11:48.279765   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:11:48.299293   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:11:48.318925   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:11:48.338728   93587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:11:48.353309   93587 ssh_runner.go:195] Run: openssl version
	I0522 18:11:48.358049   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:11:48.365885   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368779   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368829   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.374835   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:11:48.382122   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:11:48.389749   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392543   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392586   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.400682   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:11:48.407800   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:11:48.415568   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418291   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418342   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.424132   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:11:48.431192   93587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:11:48.433941   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:11:48.439661   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:11:48.445338   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:11:48.451065   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:11:48.456627   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:11:48.461988   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
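The six openssl runs above replicate `-checkend 86400`: a check fails if the certificate expires within the next 24 hours. An equivalent check in Go's crypto/x509, as a sketch (the path is taken from the first check above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within duration d, mirroring `openssl x509 -noout -checkend`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}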
	I0522 18:11:48.467384   93587 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:48.467494   93587 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:11:48.485081   93587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:11:48.492968   93587 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:11:48.492987   93587 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:11:48.492994   93587 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:11:48.493030   93587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:11:48.500158   93587 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:11:48.500524   93587 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.500622   93587 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:11:48.500860   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
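The kubeconfig on the Jenkins host has lost its "ha-828033" cluster and context entries, so minikube repairs the file in place before proceeding. A sketch of the same repair using client-go's clientcmd package (module k8s.io/client-go, not minikube's internal kubeconfig helper; server address and file paths are taken from the surrounding log lines):

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/18943-9771/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		// Recreate the missing cluster, context, and user entries.
		cfg.Clusters["ha-828033"] = &api.Cluster{
			Server:               "https://192.168.49.2:8443",
			CertificateAuthority: "/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt",
		}
		cfg.AuthInfos["ha-828033"] = &api.AuthInfo{
			ClientCertificate: "/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt",
			ClientKey:         "/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key",
		}
		cfg.Contexts["ha-828033"] = &api.Context{Cluster: "ha-828033", AuthInfo: "ha-828033"}
		cfg.CurrentContext = "ha-828033"
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}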
	I0522 18:11:48.501224   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.501415   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.501829   93587 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:11:48.502116   93587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:11:48.509165   93587 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:11:48.509192   93587 kubeadm.go:591] duration metric: took 16.193394ms to restartPrimaryControlPlane
	I0522 18:11:48.509203   93587 kubeadm.go:393] duration metric: took 41.824441ms to StartCluster
	I0522 18:11:48.509229   93587 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.509281   93587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.509984   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.510194   93587 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:11:48.510219   93587 start.go:240] waiting for startup goroutines ...
	I0522 18:11:48.510231   93587 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:11:48.510288   93587 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:11:48.510308   93587 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:11:48.510350   93587 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 18:11:48.510358   93587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	W0522 18:11:48.510362   93587 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:11:48.510372   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:48.510392   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.510671   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.510833   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.531981   93587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:11:48.529656   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.532267   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.533374   93587 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.533470   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:11:48.533514   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.533609   93587 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:11:48.533626   93587 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:11:48.533656   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.533986   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.549936   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.550918   93587 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:11:48.550941   93587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:11:48.550989   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.567412   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.643338   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.658967   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.695623   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.695654   93587 retry.go:31] will retry after 143.566199ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:11:48.710095   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.710122   93587 retry.go:31] will retry after 196.09206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
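	From here the section is one failure repeated: each kubectl apply exits 1, and retry.go schedules another attempt with a growing, jittered delay (143ms and 196ms here, climbing to 48s by 18:12:45). A minimal sketch of that retry shape, assuming the usual fmt/log/math/rand/time imports; the constants are illustrative, not minikube's actual tuning:

    // Sketch only: retry with a roughly doubling, jittered delay, the pattern
    // visible in the retry.go lines above. apply stands in for the kubectl run.
    func applyWithRetry(apply func() error) error {
        delay := 150 * time.Millisecond
        var err error
        for attempt := 0; attempt < 13; attempt++ {
            if err = apply(); err == nil {
                return nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay)))
            log.Printf("will retry after %v: %v", delay+jitter, err)
            time.Sleep(delay + jitter)
            delay *= 2 // roughly doubles, matching the log's progression
        }
        return fmt.Errorf("apply failed after retries: %w", err)
    }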
	I0522 18:11:48.839382   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:48.889703   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.889737   93587 retry.go:31] will retry after 405.6758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.906883   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.957678   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.957706   93587 retry.go:31] will retry after 481.984617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.296239   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.346745   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.346776   93587 retry.go:31] will retry after 298.316645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.439941   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.490892   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.490924   93587 retry.go:31] will retry after 365.174941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.646180   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.695995   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.696026   93587 retry.go:31] will retry after 622.662088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.856274   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.908213   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.908240   93587 retry.go:31] will retry after 465.598462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.319768   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:50.370352   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.370393   93587 retry.go:31] will retry after 1.153542566s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.374493   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:50.427342   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.427370   93587 retry.go:31] will retry after 1.760070779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.524500   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:51.576096   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.576127   93587 retry.go:31] will retry after 1.395298614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.187677   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:52.238330   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.238363   93587 retry.go:31] will retry after 2.838643955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.972468   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:53.024864   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:53.024894   93587 retry.go:31] will retry after 3.988192679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.078985   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:55.254504   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.254547   93587 retry.go:31] will retry after 1.898473733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.013394   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:57.065110   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.065143   93587 retry.go:31] will retry after 3.026639765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.153313   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:57.205183   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.205216   93587 retry.go:31] will retry after 4.512900176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.093267   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:00.144874   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.144907   93587 retry.go:31] will retry after 4.624822439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.718976   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:01.770260   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.770289   93587 retry.go:31] will retry after 6.597322484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.770613   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:04.821736   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.821765   93587 retry.go:31] will retry after 6.276558674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.369690   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:08.421665   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.421695   93587 retry.go:31] will retry after 4.88361876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.099397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:11.150176   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.150214   93587 retry.go:31] will retry after 14.618513106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.307405   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:13.358292   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.358325   93587 retry.go:31] will retry after 11.702428572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.064329   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:25.116230   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.116269   93587 retry.go:31] will retry after 20.635119238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.768934   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:25.819335   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.819366   93587 retry.go:31] will retry after 22.551209597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.755397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:45.807295   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.807335   93587 retry.go:31] will retry after 48.223563966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.371303   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:48.422526   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.422554   93587 retry.go:31] will retry after 21.925283254s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:13:10.348911   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:13:10.401430   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:10.401550   93587 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:13:34.031408   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:13:34.084103   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:34.084199   93587 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:13:34.087605   93587 out.go:177] * Enabled addons: 
	I0522 18:13:34.092136   93587 addons.go:505] duration metric: took 1m45.581904576s for enable addons: enabled=[]
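	The empty enabled=[] confirms that neither addon was ever applied: every attempt died on dial tcp [::1]:8443: connect: connection refused, i.e. nothing was listening on the apiserver port inside the node (the x509 "cannot parse IP address of length 0" from cert_rotation is a secondary symptom, not the cause). A small probe sketch for that precondition; until this dial succeeds, re-running kubectl apply cannot help:

    // Sketch only: probe the apiserver port that every apply above dialed.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // e.g. "connect: connection refused", as throughout the log above
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }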
	I0522 18:13:34.092168   93587 start.go:245] waiting for cluster config update ...
	I0522 18:13:34.092175   93587 start.go:254] writing updated cluster config ...
	I0522 18:13:34.093767   93587 out.go:177] 
	I0522 18:13:34.094950   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:13:34.095010   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.096476   93587 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:13:34.097816   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:13:34.098828   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:13:34.099818   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:13:34.099834   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:13:34.099879   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:13:34.099916   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:13:34.099930   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:13:34.100028   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.116605   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:13:34.116636   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:13:34.116654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:13:34.116685   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:34.116739   93587 start.go:364] duration metric: took 36.742µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:34.116754   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:34.116759   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:34.116975   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.131815   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:13:34.131835   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:34.133519   93587 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:13:34.134577   93587 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:13:34.386505   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.403758   93587 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:13:34.404176   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:34.421199   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:13:34.421255   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:34.437668   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:13:34.438642   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.438697   93587 retry.go:31] will retry after 159.621723ms: ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	W0522 18:13:34.599398   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.599427   93587 retry.go:31] will retry after 217.688969ms: ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.948280   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:13:34.952728   93587 fix.go:56] duration metric: took 835.959949ms for fixHost
	I0522 18:13:34.952759   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 836.005567ms
	W0522 18:13:34.952776   93587 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:13:34.952870   93587 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:13:34.952882   93587 start.go:728] Will try again in 5 seconds ...
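	StartHost fails on an address-parsing check, not on docker start itself: the container restarted, but the inspect template that prints "IPv4,IPv6" for the expected network produced empty output, and splitting an empty string yields one empty field instead of two. A sketch reproducing the exact message; the empty out stands in for the template's output when the container is missing from the network the template indexes:

    // Sketch only: why the log says "should have 2 values, got 1 values: []".
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // {{with ...}} emits nothing when the indexed network is absent,
        // so the template's output for ha-828033-m02 is empty.
        out := ""
        addrs := strings.Split(strings.TrimSpace(out), ",")
        if len(addrs) != 2 {
            fmt.Printf("container addresses should have 2 values, got %d values: %v\n", len(addrs), addrs)
        }
    }

	The 5-second retry that follows, and the microsecond-interval configureAuth retries further down, re-run the same inspect and so keep hitting the identical error; plausibly the container would need to be reattached to the cluster network before any retry could succeed.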
	I0522 18:13:39.953931   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:39.954069   93587 start.go:364] duration metric: took 66.237µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:39.954098   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:39.954106   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:39.954430   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:39.971326   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:13:39.971351   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:39.973352   93587 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:13:39.974806   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:13:39.974895   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:39.990164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:39.990366   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:39.990382   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:13:40.106411   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:13:40.106441   93587 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:13:40.106497   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.123164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.123396   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.123412   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:13:40.245387   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:13:40.245458   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.262355   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.262539   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.262563   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:13:40.375115   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:13:40.375140   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:13:40.375156   93587 ubuntu.go:177] setting up certificates
	I0522 18:13:40.375167   93587 provision.go:84] configureAuth start
	I0522 18:13:40.375212   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.390878   93587 provision.go:87] duration metric: took 15.702592ms to configureAuth
	W0522 18:13:40.390903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.390928   93587 retry.go:31] will retry after 70.356µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.392042   93587 provision.go:84] configureAuth start
	I0522 18:13:40.392097   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.408007   93587 provision.go:87] duration metric: took 15.947883ms to configureAuth
	W0522 18:13:40.408024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.408044   93587 retry.go:31] will retry after 137.47µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.409151   93587 provision.go:84] configureAuth start
	I0522 18:13:40.409201   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.423891   93587 provision.go:87] duration metric: took 14.725235ms to configureAuth
	W0522 18:13:40.423909   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.423925   93587 retry.go:31] will retry after 262.374µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.425034   93587 provision.go:84] configureAuth start
	I0522 18:13:40.425086   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.439293   93587 provision.go:87] duration metric: took 14.241319ms to configureAuth
	W0522 18:13:40.439314   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.439330   93587 retry.go:31] will retry after 298.899µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.440439   93587 provision.go:84] configureAuth start
	I0522 18:13:40.440498   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.455314   93587 provision.go:87] duration metric: took 14.857395ms to configureAuth
	W0522 18:13:40.455331   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.455346   93587 retry.go:31] will retry after 425.458µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.456456   93587 provision.go:84] configureAuth start
	I0522 18:13:40.456517   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.473826   93587 provision.go:87] duration metric: took 17.346003ms to configureAuth
	W0522 18:13:40.473848   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.473864   93587 retry.go:31] will retry after 794.432µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.474977   93587 provision.go:84] configureAuth start
	I0522 18:13:40.475045   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.491066   93587 provision.go:87] duration metric: took 16.070525ms to configureAuth
	W0522 18:13:40.491088   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.491107   93587 retry.go:31] will retry after 1.614344ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.493281   93587 provision.go:84] configureAuth start
	I0522 18:13:40.493345   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.508551   93587 provision.go:87] duration metric: took 15.254686ms to configureAuth
	W0522 18:13:40.508569   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.508587   93587 retry.go:31] will retry after 998.104µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.509712   93587 provision.go:84] configureAuth start
	I0522 18:13:40.509790   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.525006   93587 provision.go:87] duration metric: took 15.263842ms to configureAuth
	W0522 18:13:40.525024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.525042   93587 retry.go:31] will retry after 3.338034ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.529222   93587 provision.go:84] configureAuth start
	I0522 18:13:40.529282   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.544880   93587 provision.go:87] duration metric: took 15.639211ms to configureAuth
	W0522 18:13:40.544898   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.544922   93587 retry.go:31] will retry after 3.40783ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.549101   93587 provision.go:84] configureAuth start
	I0522 18:13:40.549153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.564670   93587 provision.go:87] duration metric: took 15.552453ms to configureAuth
	W0522 18:13:40.564691   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.564707   93587 retry.go:31] will retry after 7.302355ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.572891   93587 provision.go:84] configureAuth start
	I0522 18:13:40.572957   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.588884   93587 provision.go:87] duration metric: took 15.972307ms to configureAuth
	W0522 18:13:40.588903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.588921   93587 retry.go:31] will retry after 5.301531ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.595100   93587 provision.go:84] configureAuth start
	I0522 18:13:40.595153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.610191   93587 provision.go:87] duration metric: took 15.074227ms to configureAuth
	W0522 18:13:40.610211   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.610230   93587 retry.go:31] will retry after 11.026949ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.621370   93587 provision.go:84] configureAuth start
	I0522 18:13:40.621446   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.636327   93587 provision.go:87] duration metric: took 14.934708ms to configureAuth
	W0522 18:13:40.636340   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.636356   93587 retry.go:31] will retry after 25.960513ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.662569   93587 provision.go:84] configureAuth start
	I0522 18:13:40.662637   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.677809   93587 provision.go:87] duration metric: took 15.220921ms to configureAuth
	W0522 18:13:40.677824   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.677840   93587 retry.go:31] will retry after 32.75774ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.711021   93587 provision.go:84] configureAuth start
	I0522 18:13:40.711093   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.726493   93587 provision.go:87] duration metric: took 15.45214ms to configureAuth
	W0522 18:13:40.726508   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.726524   93587 retry.go:31] will retry after 36.849589ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.763725   93587 provision.go:84] configureAuth start
	I0522 18:13:40.763797   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.779769   93587 provision.go:87] duration metric: took 16.019178ms to configureAuth
	W0522 18:13:40.779786   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.779806   93587 retry.go:31] will retry after 56.725665ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.837004   93587 provision.go:84] configureAuth start
	I0522 18:13:40.837114   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.852417   93587 provision.go:87] duration metric: took 15.386685ms to configureAuth
	W0522 18:13:40.852435   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.852451   93587 retry.go:31] will retry after 111.712266ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.964732   93587 provision.go:84] configureAuth start
	I0522 18:13:40.964841   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.981335   93587 provision.go:87] duration metric: took 16.561934ms to configureAuth
	W0522 18:13:40.981354   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.981372   93587 retry.go:31] will retry after 119.589549ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.101655   93587 provision.go:84] configureAuth start
	I0522 18:13:41.101767   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.118304   93587 provision.go:87] duration metric: took 16.624114ms to configureAuth
	W0522 18:13:41.118332   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.118349   93587 retry.go:31] will retry after 172.20415ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.290646   93587 provision.go:84] configureAuth start
	I0522 18:13:41.290734   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.306781   93587 provision.go:87] duration metric: took 16.099389ms to configureAuth
	W0522 18:13:41.306799   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.306815   93587 retry.go:31] will retry after 467.479675ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.774386   93587 provision.go:84] configureAuth start
	I0522 18:13:41.774495   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.790035   93587 provision.go:87] duration metric: took 15.610421ms to configureAuth
	W0522 18:13:41.790054   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.790070   93587 retry.go:31] will retry after 663.257318ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.453817   93587 provision.go:84] configureAuth start
	I0522 18:13:42.453935   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.473961   93587 provision.go:87] duration metric: took 20.113537ms to configureAuth
	W0522 18:13:42.473982   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.473999   93587 retry.go:31] will retry after 453.336791ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.928400   93587 provision.go:84] configureAuth start
	I0522 18:13:42.928480   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.944835   93587 provision.go:87] duration metric: took 16.404983ms to configureAuth
	W0522 18:13:42.944858   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.944874   93587 retry.go:31] will retry after 1.661774658s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.607615   93587 provision.go:84] configureAuth start
	I0522 18:13:44.607723   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:44.623466   93587 provision.go:87] duration metric: took 15.817599ms to configureAuth
	W0522 18:13:44.623490   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.623506   93587 retry.go:31] will retry after 2.087899686s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.711969   93587 provision.go:84] configureAuth start
	I0522 18:13:46.712058   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:46.728600   93587 provision.go:87] duration metric: took 16.596208ms to configureAuth
	W0522 18:13:46.728620   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.728636   93587 retry.go:31] will retry after 1.751255493s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.480034   93587 provision.go:84] configureAuth start
	I0522 18:13:48.480138   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:48.495909   93587 provision.go:87] duration metric: took 15.845589ms to configureAuth
	W0522 18:13:48.495927   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.495944   93587 retry.go:31] will retry after 3.216449309s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.712476   93587 provision.go:84] configureAuth start
	I0522 18:13:51.712600   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:51.728675   93587 provision.go:87] duration metric: took 16.149731ms to configureAuth
	W0522 18:13:51.728694   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.728713   93587 retry.go:31] will retry after 4.442037166s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.171311   93587 provision.go:84] configureAuth start
	I0522 18:13:56.171390   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:56.188514   93587 provision.go:87] duration metric: took 17.174931ms to configureAuth
	W0522 18:13:56.188532   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.188548   93587 retry.go:31] will retry after 12.471520302s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.660614   93587 provision.go:84] configureAuth start
	I0522 18:14:08.660710   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:08.677166   93587 provision.go:87] duration metric: took 16.519042ms to configureAuth
	W0522 18:14:08.677185   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.677201   93587 retry.go:31] will retry after 10.952874884s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.630561   93587 provision.go:84] configureAuth start
	I0522 18:14:19.630655   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:19.646798   93587 provision.go:87] duration metric: took 16.206763ms to configureAuth
	W0522 18:14:19.646816   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.646833   93587 retry.go:31] will retry after 24.173560862s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.822465   93587 provision.go:84] configureAuth start
	I0522 18:14:43.822544   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:43.838993   93587 provision.go:87] duration metric: took 16.502247ms to configureAuth
	W0522 18:14:43.839013   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.839034   93587 retry.go:31] will retry after 18.866878171s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.707256   93587 provision.go:84] configureAuth start
	I0522 18:15:02.707363   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:15:02.723837   93587 provision.go:87] duration metric: took 16.544569ms to configureAuth
	W0522 18:15:02.723855   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723871   93587 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723880   93587 machine.go:97] duration metric: took 1m22.749059211s to provisionDockerMachine
	I0522 18:15:02.723935   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:02.723966   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:15:02.739583   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:15:02.819663   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:02.823565   93587 fix.go:56] duration metric: took 1m22.869456878s for fixHost
	I0522 18:15:02.823585   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m22.869501248s
	W0522 18:15:02.823659   93587 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.826139   93587 out.go:177] 
	W0522 18:15:02.827395   93587 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:15:02.827414   93587 out.go:239] * 
	W0522 18:15:02.828270   93587 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:15:02.829647   93587 out.go:177] 
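	
	==> note: why the start failed with "got 1 values: []" <==
	The provisioning loop above dies on the docker container inspect template
	{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}.
	When the "ha-828033-m02" key is missing from .NetworkSettings.Networks, {{with}}
	skips its body and the template prints nothing; splitting an empty string on ","
	still yields one empty field. The sketch below reproduces the logged message under
	that assumption (the split-on-comma step is inferred, not shown in this log):
	
	package main
	
	import (
		"fmt"
		"strings"
	)
	
	func main() {
		// docker inspect printed nothing for the missing network key.
		out := ""
	
		// Assumed parsing step: an "IPv4,IPv6" pair split on a comma.
		addrs := strings.Split(strings.TrimSpace(out), ",")
		if len(addrs) != 2 {
			// Prints: container addresses should have 2 values, got 1 values: []
			fmt.Printf("container addresses should have 2 values, got %d values: %v\n",
				len(addrs), addrs)
		}
	}
	
	On a healthy node the same template would print something like "192.168.49.3,"
	(IPv6 empty), which splits into the expected two values.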
	
	
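	==> note: shape of the retry cadence <==
	The configureAuth retries above back off from ~70µs to tens of seconds, roughly
	doubling with random scatter before a cap: classic jittered exponential backoff.
	A minimal sketch of that pattern follows; the constants are illustrative
	assumptions, not minikube's actual retry.go parameters:
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// backoff returns the wait before retry number "attempt": an exponential
	// curve, capped, then scaled by a random factor so successive waits can
	// shrink as well as grow (as seen in the logged intervals).
	func backoff(attempt int) time.Duration {
		base := 70 * time.Microsecond // first observed delay
		d := base << uint(attempt)    // double every attempt
		if d > 30*time.Second {
			d = 30 * time.Second // keep waits bounded
		}
		return time.Duration(float64(d) * (0.5 + rand.Float64()))
	}
	
	func main() {
		for i := 0; i < 12; i++ {
			fmt.Printf("attempt %2d: wait %v\n", i, backoff(i))
		}
	}
	
	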
	==> Docker <==
	May 22 18:11:47 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	May 22 18:11:47 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	May 22 18:11:47 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:47Z" level=info msg="Start cri-dockerd grpc backend"
	May 22 18:11:47 ha-828033 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	May 22 18:11:47 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805\""
	May 22 18:11:47 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"8b3fd8cf48c9595a338c9efce60e4362f1d6c5dbb5b1b6fc5065ecf200c879ff\""
	May 22 18:11:47 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323\""
	May 22 18:11:47 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:47Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"91e8c76c71ae7c1fe51a6927a57419a0bf7e4407f4dfae175272d8249720d690\""
	May 22 18:11:48 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:48Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-nhhq2_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bc32c92f2fa0451f2154953804d41863edba21af2f870a0567808c1f52d63863\""
	May 22 18:11:49 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323\""
	May 22 18:11:49 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:49Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805\""
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8000bf1a7fc4656e0dd59a8380130ae669dcc99ddf50f4b850aadebaf819e82a/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62b9b95d560d3f4c398d1a0800ce2d38b51dadb37965750611c1be367b9ff131/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2e74aa10cbe9d33593ac3b958980de129960dc253b0292c98b17d06624dfb56e/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a52b9affd7ecfddef5248fffdc613efe826704f0d0c9bf0a8342d00f941377c2/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3f8fe727d5f2c1146e1fe2d9c9a1c49e2e86e29cd0d349ecd531f66258d5d780/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:11:55 ha-828033 dockerd[970]: time="2024-05-22T18:11:55.075182620Z" level=info msg="ignoring event" container=d469550ed11073346f85aecf340fd55d6a1bd23fccb0d3496e1773d6793c357f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:06 ha-828033 dockerd[970]: time="2024-05-22T18:12:06.125876534Z" level=info msg="ignoring event" container=1f0cd4b45ad73df52277ec15e3c8091257ca9b647f16b05ebaa57c71d9953ccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:14 ha-828033 dockerd[970]: time="2024-05-22T18:12:14.060598413Z" level=info msg="ignoring event" container=446c6c944e69a25e1fe27ab51e4fd11dcd15b6b81a5037862d168b220262e5f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:36 ha-828033 dockerd[970]: time="2024-05-22T18:12:36.060310284Z" level=info msg="ignoring event" container=4269ef0c2c8a7a3fcea382b926da4ca23d7da87055ab7f0addfb719f0249696c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:39 ha-828033 dockerd[970]: time="2024-05-22T18:12:39.765046273Z" level=info msg="ignoring event" container=40ce01696320ab4ed8408a6605f3618c6400d32591f6c48979e8b96427e8c7fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:13:23 ha-828033 dockerd[970]: time="2024-05-22T18:13:23.315716517Z" level=info msg="ignoring event" container=600416053fd79df7594b878da62c29ecb8d7258a9db9707e9e72070a3fbbbe37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:13:30 ha-828033 dockerd[970]: time="2024-05-22T18:13:30.061949319Z" level=info msg="ignoring event" container=084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:14:20 ha-828033 dockerd[970]: time="2024-05-22T18:14:20.737767317Z" level=info msg="ignoring event" container=b7e48a9d0a0d6c130eca5615190effb0da597a359fee18222140fd51cc4163f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:00 ha-828033 dockerd[970]: time="2024-05-22T18:15:00.062475870Z" level=info msg="ignoring event" container=9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b9d5449cf888       91be940803172                                                                                         4 seconds ago       Exited              kube-apiserver            5                   2e74aa10cbe9d       kube-apiserver-ha-828033
	b7e48a9d0a0d6       25a1387cdab82                                                                                         54 seconds ago      Exited              kube-controller-manager   4                   8000bf1a7fc46       kube-controller-manager-ha-828033
	533c1df8e6e48       a52dc94f0a912                                                                                         3 minutes ago       Running             kube-scheduler            1                   3f8fe727d5f2c       kube-scheduler-ha-828033
	d884e203b30c3       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  0                   a52b9affd7ecf       kube-vip-ha-828033
	5e54bd5002a08       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      1                   62b9b95d560d3       etcd-ha-828033
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Exited              busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         21 minutes ago      Exited              coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         21 minutes ago      Exited              coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              21 minutes ago      Exited              kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         21 minutes ago      Exited              storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	faac4370a3326       747097150317f                                                                                         21 minutes ago      Exited              kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     21 minutes ago      Exited              kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         21 minutes ago      Exited              kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         21 minutes ago      Exited              etcd                      0                   ca6a020652c53       etcd-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:03.700386    3512 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	{"level":"info","ts":"2024-05-22T18:11:30.644098Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:11:30.644166Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:11:30.644253Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.644363Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.653966Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.654011Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:11:30.654081Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-22T18:11:30.65579Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:30.655929Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:30.655974Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [5e54bd5002a0] <==
	{"level":"info","ts":"2024-05-22T18:11:54.960376Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:11:54.96051Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:11:54.960553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:11:54.963318Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:11:54.963634Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:11:54.963677Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:11:54.963823Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.963876Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.963887Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.964191Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:54.964206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:56.250663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.251833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:11:56.251888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.252046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.253903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:11:56.253943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:15:03 up 57 min,  0 users,  load average: 0.04, 0.30, 0.41
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9b9d5449cf88] <==
	I0522 18:15:00.047629       1 options.go:221] external host was not specified, using 192.168.49.2
	I0522 18:15:00.048450       1 server.go:148] Version: v1.30.1
	I0522 18:15:00.048494       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0522 18:15:00.048918       1 run.go:74] "command failed" err="x509: cannot parse IP address of length 0"
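	
	==> note: the empty IP propagates into certificates <==
	"cannot parse IP address of length 0" is the crypto/x509 complaint about a
	certificate SAN whose IP entry is empty, consistent with the empty address
	lookup earlier in this report; the apiserver's exact code path is not
	visible in this log. A tiny illustration of the root condition:
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	func main() {
		// An empty string parses to a nil IP. A nil IP placed in a
		// certificate's IPAddresses list marshals to a zero-length SAN
		// entry, which parsers then reject with the error above.
		ip := net.ParseIP("")
		fmt.Println(ip == nil) // true
	}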
	
	
	==> kube-controller-manager [b7e48a9d0a0d] <==
	I0522 18:14:10.275710       1 serving.go:380] Generated self-signed cert in-memory
	I0522 18:14:10.706883       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0522 18:14:10.706906       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:14:10.708232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0522 18:14:10.708241       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0522 18:14:10.708399       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0522 18:14:10.708438       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0522 18:14:20.710272       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [533c1df8e6e4] <==
	E0522 18:14:28.926033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:31.465895       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:31.465961       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.214119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.214180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.666046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.666106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:46.971856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:46.971918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:49.986221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:49.986269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:51.164192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:51.164258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:53.155290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:53.155333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:57.308357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:57.308427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:00.775132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:00.775178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.142808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.142853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.389919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.389963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:03.819888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:03.819951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	
	
	==> kube-scheduler [f457f32fdd43] <==
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:11:30.556935       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0522 18:11:30.557296       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0522 18:11:30.557341       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 22 18:14:47 ha-828033 kubelet[1423]: E0522 18:14:47.977441    1423 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:14:48 ha-828033 kubelet[1423]: I0522 18:14:48.918150    1423 scope.go:117] "RemoveContainer" containerID="084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94"
	May 22 18:14:48 ha-828033 kubelet[1423]: E0522 18:14:48.918564    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:14:49 ha-828033 kubelet[1423]: W0522 18:14:49.119630    1423 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-828033&limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:49 ha-828033 kubelet[1423]: E0522 18:14:49.119730    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-828033&limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:52 ha-828033 kubelet[1423]: W0522 18:14:52.195595    1423 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:52 ha-828033 kubelet[1423]: E0522 18:14:52.195689    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:53 ha-828033 kubelet[1423]: I0522 18:14:53.049137    1423 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:14:53 ha-828033 kubelet[1423]: I0522 18:14:53.918455    1423 scope.go:117] "RemoveContainer" containerID="b7e48a9d0a0d6c130eca5615190effb0da597a359fee18222140fd51cc4163f7"
	May 22 18:14:53 ha-828033 kubelet[1423]: E0522 18:14:53.918789    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263739    1423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263702    1423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e213dd1784ea  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:11:47.946431722 +0000 UTC m=+0.108066753,LastTimestamp:2024-05-22 18:11:47.946431722 +0000 UTC m=+0.108066753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263796    1423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:14:57 ha-828033 kubelet[1423]: E0522 18:14:57.978048    1423 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:14:58 ha-828033 kubelet[1423]: W0522 18:14:58.335640    1423 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:58 ha-828033 kubelet[1423]: E0522 18:14:58.335719    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:59 ha-828033 kubelet[1423]: I0522 18:14:59.918050    1423 scope.go:117] "RemoveContainer" containerID="084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94"
	May 22 18:15:00 ha-828033 kubelet[1423]: I0522 18:15:00.644461    1423 scope.go:117] "RemoveContainer" containerID="084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94"
	May 22 18:15:00 ha-828033 kubelet[1423]: I0522 18:15:00.645423    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:00 ha-828033 kubelet[1423]: E0522 18:15:00.645909    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:01 ha-828033 kubelet[1423]: I0522 18:15:01.656460    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:01 ha-828033 kubelet[1423]: E0522 18:15:01.656842    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:02 ha-828033 kubelet[1423]: I0522 18:15:02.264674    1423 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:15:02 ha-828033 kubelet[1423]: I0522 18:15:02.664649    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:02 ha-828033 kubelet[1423]: E0522 18:15:02.665058    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
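Read together, the logs above show a single failure chain: kube-apiserver exits immediately with "x509: cannot parse IP address of length 0", so the controller-manager, scheduler, and kubelet all spin on connection-refused and no-route-to-host errors against 192.168.49.2:8443 and the HA virtual endpoint 192.168.49.254:8443. A minimal diagnostic sketch, assuming the certificate layout shown in the controller-manager log (/var/lib/minikube/certs), the kubeadm-style file name apiserver.crt, and standard minikube/openssl invocations (none of this is run by the test itself), would be to check which SANs the apiserver certificate actually carries:

	# Sketch: dump the Subject Alternative Names of the apiserver cert on the
	# primary node; an empty or malformed IP entry would match the x509 error.
	out/minikube-linux-amd64 -p ha-828033 ssh -- \
	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'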
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033: exit status 2 (241.688169ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-828033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (215.00s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (1.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 node delete m03 -v=7 --alsologtostderr: exit status 50 (124.225295ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:15:04.298783   99935 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:04.299040   99935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:04.299048   99935 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:04.299053   99935 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:04.299242   99935 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:04.299503   99935 mustload.go:65] Loading cluster: ha-828033
	I0522 18:15:04.299819   99935 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:04.300150   99935 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:04.316126   99935 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:04.316384   99935 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:04.361272   99935 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:15:04.352703149 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:04.361621   99935 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:15:04.377584   99935 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:15:04.379888   99935 out.go:177] 
	W0522 18:15:04.380944   99935 out.go:239] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-828033-m02 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-828033-m02 endpoint: failed to lookup ip for ""
	W0522 18:15:04.380993   99935 out.go:239] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I0522 18:15:04.382097   99935 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:489: node delete returned an error. args "out/minikube-linux-amd64 -p ha-828033 node delete m03 -v=7 --alsologtostderr": exit status 50
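The delete fails before ever touching m03 because minikube cannot resolve an IP for the surviving control-plane node ha-828033-m02 (the DRV_CP_ENDPOINT error above, with an empty lookup target). A quick way to confirm what Docker reports for that container's network attachment — a sketch using only standard docker CLI flags, not something the test runs — is:

	# Sketch: show m02's network attachments; an entry with an empty IPAddress
	# explains the empty endpoint lookup above.
	docker container inspect ha-828033-m02 \
	  --format '{{json .NetworkSettings.Networks}}'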
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (269.803324ms)

                                                
                                                
-- stdout --
	ha-828033
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-828033-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:15:04.421663  100010 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:04.421894  100010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:04.421902  100010 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:04.421906  100010 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:04.422081  100010 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:04.422229  100010 out.go:298] Setting JSON to false
	I0522 18:15:04.422253  100010 mustload.go:65] Loading cluster: ha-828033
	I0522 18:15:04.422374  100010 notify.go:220] Checking for updates...
	I0522 18:15:04.422541  100010 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:04.422553  100010 status.go:255] checking status of ha-828033 ...
	I0522 18:15:04.422903  100010 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:04.438860  100010 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:15:04.438884  100010 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:04.439087  100010 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:04.454548  100010 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:04.454750  100010 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:04.454786  100010 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:04.470195  100010 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:04.551801  100010 ssh_runner.go:195] Run: systemctl --version
	I0522 18:15:04.555615  100010 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:15:04.565329  100010 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:04.610774  100010 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:15:04.602506249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:04.611318  100010 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:15:04.611347  100010 api_server.go:166] Checking apiserver status ...
	I0522 18:15:04.611375  100010 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0522 18:15:04.620828  100010 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:15:04.620846  100010 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:15:04.620861  100010 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:15:04.620876  100010 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:15:04.621167  100010 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:15:04.636936  100010 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:15:04.636957  100010 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:15:04.637194  100010 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:15:04.652345  100010 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:15:04.652368  100010 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:15:04.652381  100010 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr" : exit status 7
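The "container addresses should have 2 values, got 1 values" error comes from the Go template minikube runs in the stderr above: it expects an "IPAddress,GlobalIPv6Address" pair for the node's network, and ha-828033-m02 yields nothing. As a comparison sketch (the template is copied verbatim from the status log, here pointed at the healthy primary), the expected shape is two comma-separated fields:

	# Sketch: on the healthy primary this prints "192.168.49.2," (IPv4 set,
	# IPv6 empty — see the inspect output below); on m02 it prints nothing.
	docker container inspect ha-828033 \
	  -f '{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}'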
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:11:41.545152719Z",
	            "FinishedAt": "2024-05-22T18:11:40.848620795Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30d0c98ea9130cbb800c462fe8803bee586edca8539288200e46ac88b3b024b2",
	            "SandboxKey": "/var/run/docker/netns/30d0c98ea913",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32806"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32804"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "2ba8cb80c659667fb6bda12680449f8c1464b6ce638e2e5d144c21ea7f6d07eb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
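The inspect dump confirms the primary is still healthy at the Docker level: the "ha-828033" network entry carries IPAMConfig and IPAddress 192.168.49.2, and SSH is published on 127.0.0.1:32807. A sketch for pulling just those fields out of the dump (jq is assumed to be available on the host; the test framework does not use it):

	# Sketch: extract the node IP and published SSH port from docker inspect.
	docker container inspect ha-828033 | jq -r \
	  '.[0].NetworkSettings.Networks["ha-828033"].IPAddress,
	   .[0].NetworkSettings.Ports["22/tcp"][0].HostPort'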
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 2 (237.736725ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh                                                    |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1                                                        |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033 -v=7                                                           | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-828033 -v=7                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC | 22 May 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	| node    | ha-828033 node delete m03 -v=7                                                   | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
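	The cp/ssh pairs above are the test's copy-file verification pattern; a minimal sketch reproducing one pair by hand (profile, paths, and node names taken from the table; flag spelling assumed from the minikube CLI) would be:
	
	  out/minikube-linux-amd64 -p ha-828033 cp testdata/cp-test.txt ha-828033:/home/docker/cp-test.txt
	  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033 "sudo cat /home/docker/cp-test.txt"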
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:11:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:11:41.134703   93587 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:11:41.134930   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.134938   93587 out.go:304] Setting ErrFile to fd 2...
	I0522 18:11:41.134942   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.135123   93587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:11:41.135663   93587 out.go:298] Setting JSON to false
	I0522 18:11:41.136597   93587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3245,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:11:41.136652   93587 start.go:139] virtualization: kvm guest
	I0522 18:11:41.138603   93587 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:11:41.139872   93587 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:11:41.139877   93587 notify.go:220] Checking for updates...
	I0522 18:11:41.141388   93587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:11:41.142594   93587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:41.143720   93587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:11:41.144893   93587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:11:41.145865   93587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:11:41.147279   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:41.147391   93587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:11:41.167202   93587 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:11:41.167354   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.213981   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.205284379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.214076   93587 docker.go:295] overlay module found
	I0522 18:11:41.216233   93587 out.go:177] * Using the docker driver based on existing profile
	I0522 18:11:41.217269   93587 start.go:297] selected driver: docker
	I0522 18:11:41.217284   93587 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.217363   93587 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:11:41.217435   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.262537   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.253560233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.263171   93587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:11:41.263204   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:41.263213   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:41.263260   93587 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.265782   93587 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:11:41.266790   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:11:41.267878   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:11:41.268972   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:41.268999   93587 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:11:41.268994   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:11:41.269006   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:11:41.269151   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:11:41.269173   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:11:41.269261   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.283614   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:11:41.283635   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:11:41.283654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:11:41.283689   93587 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:11:41.283753   93587 start.go:364] duration metric: took 41.779µs to acquireMachinesLock for "ha-828033"
	I0522 18:11:41.283775   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:11:41.283786   93587 fix.go:54] fixHost starting: 
	I0522 18:11:41.283991   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.299535   93587 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:11:41.299560   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:11:41.301277   93587 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:11:41.302545   93587 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:11:41.550741   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.567723   93587 kic.go:430] container "ha-828033" state is running.
	I0522 18:11:41.568146   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:41.584785   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.585001   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:11:41.585061   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:41.601067   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:41.601257   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:41.601268   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:11:41.601940   93587 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58778->127.0.0.1:32807: read: connection reset by peer
	I0522 18:11:44.714380   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:11:44.714404   93587 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:11:44.714459   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.731671   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.731883   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.731902   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:11:44.852943   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:11:44.853043   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.869576   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.869790   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.869817   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:11:44.979057   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
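	The hosts fix-up just run follows the Debian/Ubuntu convention of aliasing the machine's own hostname to 127.0.1.1; an illustrative check from inside the node would be:
	
	  grep '^127.0.1.1' /etc/hosts    # expected: 127.0.1.1 ha-828033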
	I0522 18:11:44.979089   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:11:44.979116   93587 ubuntu.go:177] setting up certificates
	I0522 18:11:44.979134   93587 provision.go:84] configureAuth start
	I0522 18:11:44.979199   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:44.994933   93587 provision.go:143] copyHostCerts
	I0522 18:11:44.994969   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995017   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:11:44.995033   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995108   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:11:44.995224   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995252   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:11:44.995259   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995322   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:11:44.995400   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995422   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:11:44.995429   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995474   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:11:44.995562   93587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 18:11:45.135697   93587 provision.go:177] copyRemoteCerts
	I0522 18:11:45.135763   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:11:45.135818   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.152921   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.238902   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:11:45.238973   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:11:45.258885   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:11:45.258948   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0522 18:11:45.278444   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:11:45.278494   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:11:45.297780   93587 provision.go:87] duration metric: took 318.629986ms to configureAuth
	I0522 18:11:45.297808   93587 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:11:45.297962   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:45.298004   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.313749   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.313923   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.313939   93587 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:11:45.427468   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:11:45.427494   93587 ubuntu.go:71] root file system type: overlay
	I0522 18:11:45.427580   93587 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:11:45.427626   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.444225   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.444413   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.444506   93587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:11:45.564594   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:11:45.564669   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.580720   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.580903   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.580920   93587 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:11:45.695828   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:11:45.695857   93587 machine.go:97] duration metric: took 4.110841908s to provisionDockerMachine
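	The unit written above uses the standard systemd pattern of an empty ExecStart= line to clear the inherited command before setting a new one; a minimal sketch of the same pattern (hypothetical override path, not the file minikube manages) is:
	
	  # /etc/systemd/system/docker.service.d/override.conf -- illustrative only
	  [Service]
	  ExecStart=
	  ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	
	followed by a daemon-reload and restart, exactly as the diff/mv/systemctl one-liner above does for docker.service.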
	I0522 18:11:45.695867   93587 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:11:45.695877   93587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:11:45.695924   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:11:45.695955   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.712232   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.795493   93587 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:11:45.798393   93587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:11:45.798434   93587 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:11:45.798444   93587 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:11:45.798453   93587 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:11:45.798471   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:11:45.798511   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:11:45.798590   93587 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:11:45.798602   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:11:45.798690   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:11:45.806167   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:45.826168   93587 start.go:296] duration metric: took 130.28741ms for postStartSetup
	I0522 18:11:45.826240   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:11:45.826284   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.842515   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.923712   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:11:45.927618   93587 fix.go:56] duration metric: took 4.643832098s for fixHost
	I0522 18:11:45.927656   93587 start.go:83] releasing machines lock for "ha-828033", held for 4.643887227s
	I0522 18:11:45.927713   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:45.944156   93587 ssh_runner.go:195] Run: cat /version.json
	I0522 18:11:45.944201   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.944235   93587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:11:45.944288   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.962364   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.962780   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:46.042681   93587 ssh_runner.go:195] Run: systemctl --version
	I0522 18:11:46.109435   93587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:11:46.113688   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:11:46.129549   93587 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:11:46.129616   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:11:46.137374   93587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
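	After the find/sed patch above, a matched loopback config would look roughly like this (file name and field order illustrative; only the inserted "name" key and the rewritten "cniVersion" are known from the command):
	
	  {
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	  }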
	I0522 18:11:46.138397   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.138424   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.138550   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.152035   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:11:46.160068   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:11:46.168623   93587 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.168674   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:11:46.177246   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.185321   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:11:46.193307   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.201602   93587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:11:46.209350   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:11:46.217593   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:11:46.225824   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:11:46.234419   93587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:11:46.241490   93587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:11:46.248503   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.323097   93587 ssh_runner.go:195] Run: sudo systemctl restart containerd
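	Taken together, the sed series above leaves /etc/containerd/config.toml with roughly this shape (fragment; nesting assumed from stock containerd config, values from the commands):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    sandbox_image = "registry.k8s.io/pause:3.9"
	    restrict_oom_score_adj = false
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false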
	I0522 18:11:46.411392   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.411434   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.411494   93587 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:11:46.422471   93587 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:11:46.422535   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:11:46.433407   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.449148   93587 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:11:46.452464   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:11:46.460126   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:11:46.477806   93587 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:11:46.581019   93587 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:11:46.682974   93587 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.683118   93587 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:11:46.699398   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.783890   93587 ssh_runner.go:195] Run: sudo systemctl restart docker
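	The 130-byte daemon.json pushed just above is what pins docker's cgroup driver; its key content is presumably along these lines (exact file contents are not shown in the log):
	
	  { "exec-opts": ["native.cgroupdriver=cgroupfs"] }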
	I0522 18:11:47.043450   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:11:47.053302   93587 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:11:47.063710   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.072923   93587 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:11:47.142683   93587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:11:47.222920   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.298978   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:11:47.310891   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.320183   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.395538   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:11:47.457881   93587 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:11:47.457934   93587 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:11:47.461279   93587 start.go:562] Will wait 60s for crictl version
	I0522 18:11:47.461343   93587 ssh_runner.go:195] Run: which crictl
	I0522 18:11:47.464606   93587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:11:47.495432   93587 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:11:47.495495   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.517256   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.541495   93587 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:11:47.541571   93587 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:11:47.557260   93587 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:11:47.560496   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:11:47.570471   93587 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:11:47.570586   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:47.570631   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.587878   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.587899   93587 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:11:47.587950   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.606514   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.606541   93587 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:11:47.606558   93587 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:11:47.606687   93587 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
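	Once this drop-in lands on the node, the effective unit and the running process can be inspected there (illustrative commands; paths from the log above):
	
	  sudo systemctl cat kubelet
	  pgrep -a kubelet    # expect /var/lib/minikube/binaries/v1.30.1/kubelet --hostname-override=ha-828033 ...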
	I0522 18:11:47.606735   93587 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:11:47.652790   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:47.652807   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:47.652824   93587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:11:47.652857   93587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:11:47.652974   93587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
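The config above is what minikube writes to /var/tmp/minikube/kubeadm.yaml.new; minikube renders it from Go text/template definitions in its bootstrapper. The sketch below is a simplified, hypothetical reconstruction of that step (not minikube's actual code or types), showing how the per-node values from the log, the advertise address and node name, are substituted:

package main

import (
	"os"
	"text/template"
)

// kubeadmValues is an illustrative stand-in for minikube's template data.
type kubeadmValues struct {
	AdvertiseAddress string
	NodeName         string
}

// A trimmed-down version of the InitConfiguration block logged above.
const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initConfigTmpl))
	// Values taken from the ha-828033 primary control-plane node above.
	v := kubeadmValues{AdvertiseAddress: "192.168.49.2", NodeName: "ha-828033"}
	if err := t.Execute(os.Stdout, v); err != nil {
		panic(err)
	}
}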
	
	I0522 18:11:47.652992   93587 kube-vip.go:115] generating kube-vip config ...
	I0522 18:11:47.653024   93587 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:11:47.663570   93587 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
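The failed `lsmod | grep ip_vs` probe above (exit 1, empty output) means no ip_vs kernel modules are loaded in the Docker-driver node, so kube-vip is configured without control-plane load balancing and relies on ARP-based VIP failover alone. A minimal sketch of the same check done natively, assuming /proc/modules is readable (lsmod is just a front-end for it):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// hasIPVS reports whether any ip_vs kernel module is loaded by scanning
// /proc/modules, the same data `lsmod` formats.
func hasIPVS() (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), "ip_vs") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := hasIPVS()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Without ip_vs, the kube-vip manifest below is generated without
	// load-balancer settings and handles the VIP via ARP only.
	fmt.Println("ip_vs loaded:", ok)
}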
	I0522 18:11:47.663661   93587 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
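This manifest is dropped into /etc/kubernetes/manifests as a static pod: kube-vip leader-elects through the plndr-cp-lock lease and advertises the HA VIP 192.168.49.254 on eth0 via ARP, so clients reach whichever control-plane node currently holds the lease. A minimal sketch, assuming you just want to probe the VIP's API port from inside the node network:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The HA VIP and API port from the kube-vip config above.
	const vip = "192.168.49.254:8443"
	conn, err := net.DialTimeout("tcp", vip, 2*time.Second)
	if err != nil {
		fmt.Println("VIP not answering:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP reachable:", vip)
}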
	I0522 18:11:47.663702   93587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:11:47.671164   93587 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:11:47.671218   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:11:47.678433   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:11:47.693280   93587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:11:47.707810   93587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:11:47.722391   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:11:47.737026   93587 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:11:47.739845   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
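The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and appends the HA VIP, so in-node clients resolve the control-plane name to 192.168.49.254. The same rewrite expressed in Go, a minimal sketch (it targets a local test file, since /etc/hosts needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

// pinHost rewrites a hosts file so exactly one line maps name to ip,
// mirroring the grep -v / append one-liner in the log above.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Values from the log; "hosts.test" is a stand-in path for illustration.
	if err := pinHost("hosts.test", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}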
	I0522 18:11:47.748775   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.823891   93587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:11:47.835577   93587 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:11:47.835598   93587 certs.go:194] generating shared ca certs ...
	I0522 18:11:47.835613   93587 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:47.835758   93587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:11:47.835842   93587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:11:47.835862   93587 certs.go:256] generating profile certs ...
	I0522 18:11:47.835960   93587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:11:47.835985   93587 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:11:47.836008   93587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:11:48.121096   93587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:11:48.121121   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121275   93587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:11:48.121287   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121352   93587 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:11:48.121491   93587 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 18:11:48.121607   93587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:11:48.121622   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:11:48.121634   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:11:48.121647   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:11:48.121659   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:11:48.121671   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:11:48.121684   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:11:48.121695   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:11:48.121706   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:11:48.121761   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:11:48.121786   93587 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:11:48.121796   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:11:48.121824   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:11:48.121846   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:11:48.121868   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:11:48.121906   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:48.121932   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.121947   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.121963   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.122488   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:11:48.143159   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:11:48.162787   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:11:48.182506   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:11:48.201936   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:11:48.221464   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:11:48.240723   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:11:48.260323   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:11:48.279765   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:11:48.299293   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:11:48.318925   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:11:48.338728   93587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:11:48.353309   93587 ssh_runner.go:195] Run: openssl version
	I0522 18:11:48.358049   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:11:48.365885   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368779   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368829   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.374835   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:11:48.382122   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:11:48.389749   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392543   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392586   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.400682   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:11:48.407800   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:11:48.415568   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418291   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418342   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.424132   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
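The `3ec20f2e.0` and `b5213941.0` link names above are OpenSSL subject-hash lookups: `openssl x509 -hash -noout` prints a hash of the certificate's subject, and OpenSSL resolves CAs in /etc/ssl/certs through `<hash>.<n>` symlinks. A sketch of the hash-and-link step, assuming the openssl binary is on PATH:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// subjectHash shells out to `openssl x509 -hash -noout -in cert`,
// the same command the log above runs for each CA certificate.
func subjectHash(cert string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	h, err := subjectHash(cert)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// OpenSSL resolves CAs via <hash>.0 symlinks in /etc/ssl/certs.
	link := filepath.Join("/etc/ssl/certs", h+".0")
	fmt.Printf("ln -fs %s %s\n", cert, link)
}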
	I0522 18:11:48.431192   93587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:11:48.433941   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:11:48.439661   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:11:48.445338   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:11:48.451065   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:11:48.456627   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:11:48.461988   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
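Each `-checkend 86400` run asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a zero exit lets minikube skip regenerating that cert. The equivalent check using Go's standard library, a minimal sketch against one of the certs above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// One of the certificates checked above (path chosen for illustration).
	data, err := os.ReadFile("/var/lib/minikube/certs/etcd/server.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of `openssl x509 -checkend 86400`: does the cert
	// expire within the next 24 hours?
	deadline := time.Now().Add(24 * time.Hour)
	fmt.Println("expires within 24h:", cert.NotAfter.Before(deadline))
}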
	I0522 18:11:48.467384   93587 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:48.467494   93587 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:11:48.485081   93587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:11:48.492968   93587 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:11:48.492987   93587 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:11:48.492994   93587 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:11:48.493030   93587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:11:48.500158   93587 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:11:48.500524   93587 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.500622   93587 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:11:48.500860   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.501224   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.501415   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.501829   93587 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:11:48.502116   93587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:11:48.509165   93587 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:11:48.509192   93587 kubeadm.go:591] duration metric: took 16.193394ms to restartPrimaryControlPlane
	I0522 18:11:48.509203   93587 kubeadm.go:393] duration metric: took 41.824441ms to StartCluster
	I0522 18:11:48.509229   93587 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.509281   93587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.509984   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.510194   93587 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:11:48.510219   93587 start.go:240] waiting for startup goroutines ...
	I0522 18:11:48.510231   93587 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:11:48.510288   93587 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:11:48.510308   93587 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:11:48.510350   93587 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 18:11:48.510358   93587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	W0522 18:11:48.510362   93587 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:11:48.510372   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:48.510392   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.510671   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.510833   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.531981   93587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:11:48.529656   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.532267   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.533374   93587 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.533470   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:11:48.533514   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.533609   93587 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:11:48.533626   93587 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:11:48.533656   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.533986   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.549936   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.550918   93587 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:11:48.550941   93587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:11:48.550989   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.567412   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.643338   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.658967   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.695623   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.695654   93587 retry.go:31] will retry after 143.566199ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:11:48.710095   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.710122   93587 retry.go:31] will retry after 196.09206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
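Every apply in this stretch fails the same way: kubectl on the node cannot reach the apiserver (`dial tcp [::1]:8443: connect: connection refused`), so schema validation cannot fetch /openapi/v2 and each command exits 1 before anything is applied; the `cert_rotation.go` line is a side effect of the same unreachable endpoint. minikube's retry helper (retry.go:31) then re-runs each apply with growing, jittered waits (143ms, 196ms, 405ms, and so on up to 14.6s below). A generic sketch of that backoff pattern, not minikube's actual retry package:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo re-runs fn with exponential backoff plus jitter until it
// succeeds or the accumulated waits exceed maxWait; the shape loosely
// mirrors the logged intervals, not minikube's exact implementation.
func retryExpo(fn func() error, base, maxWait time.Duration) error {
	var slept time.Duration
	wait := base
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if slept >= maxWait {
			return fmt.Errorf("retries exhausted: %w", err)
		}
		// Sleep between 0.5x and 1.5x of the current wait, then double it.
		jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		time.Sleep(jittered)
		slept += jittered
		wait *= 2
	}
}

func main() {
	attempt := 0
	err := retryExpo(func() error {
		attempt++
		fmt.Println("apply attempt", attempt)
		return fmt.Errorf("connection refused") // stand-in for the kubectl failure
	}, 150*time.Millisecond, 5*time.Second)
	fmt.Println(err)
}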
	I0522 18:11:48.839382   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:48.889703   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.889737   93587 retry.go:31] will retry after 405.6758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.906883   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.957678   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.957706   93587 retry.go:31] will retry after 481.984617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.296239   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.346745   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.346776   93587 retry.go:31] will retry after 298.316645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.439941   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.490892   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.490924   93587 retry.go:31] will retry after 365.174941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.646180   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.695995   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.696026   93587 retry.go:31] will retry after 622.662088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.856274   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.908213   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.908240   93587 retry.go:31] will retry after 465.598462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.319768   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:50.370352   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.370393   93587 retry.go:31] will retry after 1.153542566s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.374493   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:50.427342   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.427370   93587 retry.go:31] will retry after 1.760070779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.524500   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:51.576096   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.576127   93587 retry.go:31] will retry after 1.395298614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.187677   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:52.238330   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.238363   93587 retry.go:31] will retry after 2.838643955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.972468   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:53.024864   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:53.024894   93587 retry.go:31] will retry after 3.988192679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.078985   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:55.254504   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.254547   93587 retry.go:31] will retry after 1.898473733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.013394   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:57.065110   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.065143   93587 retry.go:31] will retry after 3.026639765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.153313   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:57.205183   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.205216   93587 retry.go:31] will retry after 4.512900176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.093267   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:00.144874   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.144907   93587 retry.go:31] will retry after 4.624822439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.718976   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:01.770260   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.770289   93587 retry.go:31] will retry after 6.597322484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.770613   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:04.821736   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.821765   93587 retry.go:31] will retry after 6.276558674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.369690   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:08.421665   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.421695   93587 retry.go:31] will retry after 4.88361876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.099397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:11.150176   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.150214   93587 retry.go:31] will retry after 14.618513106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.307405   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:13.358292   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.358325   93587 retry.go:31] will retry after 11.702428572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.064329   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:25.116230   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.116269   93587 retry.go:31] will retry after 20.635119238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.768934   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:25.819335   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.819366   93587 retry.go:31] will retry after 22.551209597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.755397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:45.807295   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.807335   93587 retry.go:31] will retry after 48.223563966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.371303   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:48.422526   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.422554   93587 retry.go:31] will retry after 21.925283254s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:13:10.348911   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:13:10.401430   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:10.401550   93587 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:13:34.031408   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:13:34.084103   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:34.084199   93587 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:13:34.087605   93587 out.go:177] * Enabled addons: 
	I0522 18:13:34.092136   93587 addons.go:505] duration metric: took 1m45.581904576s for enable addons: enabled=[]
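The addon block above shows minikube's retry behavior: every `kubectl apply` fails because client-side validation needs to download the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443, so retry.go backs off with growing, jittered delays until it gives up and reports both addons as failed ("enabled=[]"). A minimal sketch of that retry-with-backoff shape, assuming illustrative constants and a stubbed-out apply (this is not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or the deadline passes,
// sleeping a jittered, roughly doubling delay between attempts -- the shape of
// the "will retry after ..." lines in the log above.
func retryWithBackoff(fn func() error, initial time.Duration, deadline time.Time) error {
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// +/-50% jitter around the current delay, then double it.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	apply := func() error {
		// Stand-in for the failing `kubectl apply`: with the apiserver down,
		// validation cannot fetch the OpenAPI schema and every attempt fails.
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	}
	fmt.Println(retryWithBackoff(apply, 2*time.Second, time.Now().Add(15*time.Second)))
}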
	I0522 18:13:34.092168   93587 start.go:245] waiting for cluster config update ...
	I0522 18:13:34.092175   93587 start.go:254] writing updated cluster config ...
	I0522 18:13:34.093767   93587 out.go:177] 
	I0522 18:13:34.094950   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:13:34.095010   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.096476   93587 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:13:34.097816   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:13:34.098828   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:13:34.099818   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:13:34.099834   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:13:34.099879   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:13:34.099916   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:13:34.099930   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:13:34.100028   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.116605   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:13:34.116636   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:13:34.116654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:13:34.116685   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:34.116739   93587 start.go:364] duration metric: took 36.742µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:34.116754   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:34.116759   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:34.116975   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.131815   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:13:34.131835   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:34.133519   93587 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:13:34.134577   93587 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:13:34.386505   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.403758   93587 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:13:34.404176   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:34.421199   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:13:34.421255   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:34.437668   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:13:34.438642   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.438697   93587 retry.go:31] will retry after 159.621723ms: ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	W0522 18:13:34.599398   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.599427   93587 retry.go:31] will retry after 217.688969ms: ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.948280   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:13:34.952728   93587 fix.go:56] duration metric: took 835.959949ms for fixHost
	I0522 18:13:34.952759   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 836.005567ms
	W0522 18:13:34.952776   93587 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:13:34.952870   93587 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:13:34.952882   93587 start.go:728] Will try again in 5 seconds ...
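The m02 restart trips on a different error. The `docker container inspect` format string above is expected to print "IPv4,IPv6" for the "ha-828033-m02" network entry, but when the container has lost that network attachment the {{with}} action short-circuits on the nil lookup and prints nothing, which is where "container addresses should have 2 values, got 1 values" comes from. A self-contained sketch of that template behavior, using stub types as an assumption (Docker's real inspect types live in its API package):

package main

import (
	"os"
	"text/template"
)

// Stubs standing in for the relevant slice of `docker container inspect` output.
type endpoint struct{ IPAddress, GlobalIPv6Address string }
type netSettings struct{ Networks map[string]*endpoint }
type containerInfo struct{ NetworkSettings netSettings }

func main() {
	// The exact format string passed to `docker container inspect -f` above.
	tmpl := template.Must(template.New("ip").Parse(
		`{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))

	// Healthy case: the container is attached to its cluster network, so the
	// template prints two comma-separated fields ("192.168.49.3," here, since
	// no IPv6 address is set).
	attached := containerInfo{netSettings{map[string]*endpoint{
		"ha-828033-m02": {IPAddress: "192.168.49.3"},
	}}}
	_ = tmpl.Execute(os.Stdout, attached)
	os.Stdout.WriteString("\n")

	// Failure case seen in this run: the network entry is missing, the index
	// lookup yields a nil pointer, {{with}} skips its body, and the template
	// prints nothing -- leaving the caller with one field instead of two.
	_ = tmpl.Execute(os.Stdout, containerInfo{netSettings{map[string]*endpoint{}}})
}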
	I0522 18:13:39.953931   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:39.954069   93587 start.go:364] duration metric: took 66.237µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:39.954098   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:39.954106   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:39.954430   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:39.971326   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:13:39.971351   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:39.973352   93587 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:13:39.974806   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:13:39.974895   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:39.990164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:39.990366   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:39.990382   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:13:40.106411   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:13:40.106441   93587 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:13:40.106497   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.123164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.123396   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.123412   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:13:40.245387   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:13:40.245458   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.262355   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.262539   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.262563   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:13:40.375115   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:13:40.375140   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:13:40.375156   93587 ubuntu.go:177] setting up certificates
	I0522 18:13:40.375167   93587 provision.go:84] configureAuth start
	I0522 18:13:40.375212   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.390878   93587 provision.go:87] duration metric: took 15.702592ms to configureAuth
	W0522 18:13:40.390903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.390928   93587 retry.go:31] will retry after 70.356µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.392042   93587 provision.go:84] configureAuth start
	I0522 18:13:40.392097   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.408007   93587 provision.go:87] duration metric: took 15.947883ms to configureAuth
	W0522 18:13:40.408024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.408044   93587 retry.go:31] will retry after 137.47µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.409151   93587 provision.go:84] configureAuth start
	I0522 18:13:40.409201   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.423891   93587 provision.go:87] duration metric: took 14.725235ms to configureAuth
	W0522 18:13:40.423909   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.423925   93587 retry.go:31] will retry after 262.374µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.425034   93587 provision.go:84] configureAuth start
	I0522 18:13:40.425086   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.439293   93587 provision.go:87] duration metric: took 14.241319ms to configureAuth
	W0522 18:13:40.439314   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.439330   93587 retry.go:31] will retry after 298.899µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.440439   93587 provision.go:84] configureAuth start
	I0522 18:13:40.440498   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.455314   93587 provision.go:87] duration metric: took 14.857395ms to configureAuth
	W0522 18:13:40.455331   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.455346   93587 retry.go:31] will retry after 425.458µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.456456   93587 provision.go:84] configureAuth start
	I0522 18:13:40.456517   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.473826   93587 provision.go:87] duration metric: took 17.346003ms to configureAuth
	W0522 18:13:40.473848   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.473864   93587 retry.go:31] will retry after 794.432µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.474977   93587 provision.go:84] configureAuth start
	I0522 18:13:40.475045   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.491066   93587 provision.go:87] duration metric: took 16.070525ms to configureAuth
	W0522 18:13:40.491088   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.491107   93587 retry.go:31] will retry after 1.614344ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.493281   93587 provision.go:84] configureAuth start
	I0522 18:13:40.493345   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.508551   93587 provision.go:87] duration metric: took 15.254686ms to configureAuth
	W0522 18:13:40.508569   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.508587   93587 retry.go:31] will retry after 998.104µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.509712   93587 provision.go:84] configureAuth start
	I0522 18:13:40.509790   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.525006   93587 provision.go:87] duration metric: took 15.263842ms to configureAuth
	W0522 18:13:40.525024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.525042   93587 retry.go:31] will retry after 3.338034ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.529222   93587 provision.go:84] configureAuth start
	I0522 18:13:40.529282   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.544880   93587 provision.go:87] duration metric: took 15.639211ms to configureAuth
	W0522 18:13:40.544898   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.544922   93587 retry.go:31] will retry after 3.40783ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.549101   93587 provision.go:84] configureAuth start
	I0522 18:13:40.549153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.564670   93587 provision.go:87] duration metric: took 15.552453ms to configureAuth
	W0522 18:13:40.564691   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.564707   93587 retry.go:31] will retry after 7.302355ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.572891   93587 provision.go:84] configureAuth start
	I0522 18:13:40.572957   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.588884   93587 provision.go:87] duration metric: took 15.972307ms to configureAuth
	W0522 18:13:40.588903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.588921   93587 retry.go:31] will retry after 5.301531ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.595100   93587 provision.go:84] configureAuth start
	I0522 18:13:40.595153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.610191   93587 provision.go:87] duration metric: took 15.074227ms to configureAuth
	W0522 18:13:40.610211   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.610230   93587 retry.go:31] will retry after 11.026949ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.621370   93587 provision.go:84] configureAuth start
	I0522 18:13:40.621446   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.636327   93587 provision.go:87] duration metric: took 14.934708ms to configureAuth
	W0522 18:13:40.636340   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.636356   93587 retry.go:31] will retry after 25.960513ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.662569   93587 provision.go:84] configureAuth start
	I0522 18:13:40.662637   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.677809   93587 provision.go:87] duration metric: took 15.220921ms to configureAuth
	W0522 18:13:40.677824   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.677840   93587 retry.go:31] will retry after 32.75774ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.711021   93587 provision.go:84] configureAuth start
	I0522 18:13:40.711093   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.726493   93587 provision.go:87] duration metric: took 15.45214ms to configureAuth
	W0522 18:13:40.726508   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.726524   93587 retry.go:31] will retry after 36.849589ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.763725   93587 provision.go:84] configureAuth start
	I0522 18:13:40.763797   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.779769   93587 provision.go:87] duration metric: took 16.019178ms to configureAuth
	W0522 18:13:40.779786   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.779806   93587 retry.go:31] will retry after 56.725665ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.837004   93587 provision.go:84] configureAuth start
	I0522 18:13:40.837114   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.852417   93587 provision.go:87] duration metric: took 15.386685ms to configureAuth
	W0522 18:13:40.852435   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.852451   93587 retry.go:31] will retry after 111.712266ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.964732   93587 provision.go:84] configureAuth start
	I0522 18:13:40.964841   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.981335   93587 provision.go:87] duration metric: took 16.561934ms to configureAuth
	W0522 18:13:40.981354   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.981372   93587 retry.go:31] will retry after 119.589549ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.101655   93587 provision.go:84] configureAuth start
	I0522 18:13:41.101767   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.118304   93587 provision.go:87] duration metric: took 16.624114ms to configureAuth
	W0522 18:13:41.118332   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.118349   93587 retry.go:31] will retry after 172.20415ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.290646   93587 provision.go:84] configureAuth start
	I0522 18:13:41.290734   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.306781   93587 provision.go:87] duration metric: took 16.099389ms to configureAuth
	W0522 18:13:41.306799   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.306815   93587 retry.go:31] will retry after 467.479675ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.774386   93587 provision.go:84] configureAuth start
	I0522 18:13:41.774495   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.790035   93587 provision.go:87] duration metric: took 15.610421ms to configureAuth
	W0522 18:13:41.790054   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.790070   93587 retry.go:31] will retry after 663.257318ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.453817   93587 provision.go:84] configureAuth start
	I0522 18:13:42.453935   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.473961   93587 provision.go:87] duration metric: took 20.113537ms to configureAuth
	W0522 18:13:42.473982   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.473999   93587 retry.go:31] will retry after 453.336791ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.928400   93587 provision.go:84] configureAuth start
	I0522 18:13:42.928480   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.944835   93587 provision.go:87] duration metric: took 16.404983ms to configureAuth
	W0522 18:13:42.944858   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.944874   93587 retry.go:31] will retry after 1.661774658s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.607615   93587 provision.go:84] configureAuth start
	I0522 18:13:44.607723   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:44.623466   93587 provision.go:87] duration metric: took 15.817599ms to configureAuth
	W0522 18:13:44.623490   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.623506   93587 retry.go:31] will retry after 2.087899686s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.711969   93587 provision.go:84] configureAuth start
	I0522 18:13:46.712058   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:46.728600   93587 provision.go:87] duration metric: took 16.596208ms to configureAuth
	W0522 18:13:46.728620   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.728636   93587 retry.go:31] will retry after 1.751255493s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.480034   93587 provision.go:84] configureAuth start
	I0522 18:13:48.480138   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:48.495909   93587 provision.go:87] duration metric: took 15.845589ms to configureAuth
	W0522 18:13:48.495927   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.495944   93587 retry.go:31] will retry after 3.216449309s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.712476   93587 provision.go:84] configureAuth start
	I0522 18:13:51.712600   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:51.728675   93587 provision.go:87] duration metric: took 16.149731ms to configureAuth
	W0522 18:13:51.728694   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.728713   93587 retry.go:31] will retry after 4.442037166s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.171311   93587 provision.go:84] configureAuth start
	I0522 18:13:56.171390   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:56.188514   93587 provision.go:87] duration metric: took 17.174931ms to configureAuth
	W0522 18:13:56.188532   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.188548   93587 retry.go:31] will retry after 12.471520302s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.660614   93587 provision.go:84] configureAuth start
	I0522 18:14:08.660710   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:08.677166   93587 provision.go:87] duration metric: took 16.519042ms to configureAuth
	W0522 18:14:08.677185   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.677201   93587 retry.go:31] will retry after 10.952874884s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.630561   93587 provision.go:84] configureAuth start
	I0522 18:14:19.630655   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:19.646798   93587 provision.go:87] duration metric: took 16.206763ms to configureAuth
	W0522 18:14:19.646816   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.646833   93587 retry.go:31] will retry after 24.173560862s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.822465   93587 provision.go:84] configureAuth start
	I0522 18:14:43.822544   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:43.838993   93587 provision.go:87] duration metric: took 16.502247ms to configureAuth
	W0522 18:14:43.839013   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.839034   93587 retry.go:31] will retry after 18.866878171s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.707256   93587 provision.go:84] configureAuth start
	I0522 18:15:02.707363   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:15:02.723837   93587 provision.go:87] duration metric: took 16.544569ms to configureAuth
	W0522 18:15:02.723855   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723871   93587 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723880   93587 machine.go:97] duration metric: took 1m22.749059211s to provisionDockerMachine
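Each configureAuth attempt above fails within roughly 15ms with the identical message, because every retry re-runs the same inspect template against a container whose network entry is simply gone; the backoff (delays growing from microseconds to tens of seconds) cannot change the outcome, so provisioning gives up after about 1m22s. Illustratively, the "got 1 values: []" wording falls out of splitting the template's empty output on a comma; a sketch under that assumption, not minikube's exact code:

package main

import (
	"fmt"
	"strings"
)

// parseAddresses mimics the check behind "container addresses should have
// 2 values, got 1 values": the inspect template is expected to print
// "IPv4,IPv6", so its output must split into exactly two fields.
func parseAddresses(out string) (ipv4, ipv6 string, err error) {
	fields := strings.Split(strings.TrimSpace(out), ",")
	if len(fields) != 2 {
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(fields), fields)
	}
	return fields[0], fields[1], nil
}

func main() {
	fmt.Println(parseAddresses("192.168.49.3,")) // attached: two fields, no error
	fmt.Println(parseAddresses(""))              // detached: one empty field -> the logged error
}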
	I0522 18:15:02.723935   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:02.723966   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:15:02.739583   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:15:02.819663   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:02.823565   93587 fix.go:56] duration metric: took 1m22.869456878s for fixHost
	I0522 18:15:02.823585   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m22.869501248s
	W0522 18:15:02.823659   93587 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.826139   93587 out.go:177] 
	W0522 18:15:02.827395   93587 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:15:02.827414   93587 out.go:239] * 
	W0522 18:15:02.828270   93587 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:15:02.829647   93587 out.go:177] 
	
	
	==> Docker <==
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/8000bf1a7fc4656e0dd59a8380130ae669dcc99ddf50f4b850aadebaf819e82a/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/62b9b95d560d3f4c398d1a0800ce2d38b51dadb37965750611c1be367b9ff131/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2e74aa10cbe9d33593ac3b958980de129960dc253b0292c98b17d06624dfb56e/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a52b9affd7ecfddef5248fffdc613efe826704f0d0c9bf0a8342d00f941377c2/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:11:54 ha-828033 cri-dockerd[1211]: time="2024-05-22T18:11:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3f8fe727d5f2c1146e1fe2d9c9a1c49e2e86e29cd0d349ecd531f66258d5d780/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:11:55 ha-828033 dockerd[970]: time="2024-05-22T18:11:55.075182620Z" level=info msg="ignoring event" container=d469550ed11073346f85aecf340fd55d6a1bd23fccb0d3496e1773d6793c357f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:06 ha-828033 dockerd[970]: time="2024-05-22T18:12:06.125876534Z" level=info msg="ignoring event" container=1f0cd4b45ad73df52277ec15e3c8091257ca9b647f16b05ebaa57c71d9953ccb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:14 ha-828033 dockerd[970]: time="2024-05-22T18:12:14.060598413Z" level=info msg="ignoring event" container=446c6c944e69a25e1fe27ab51e4fd11dcd15b6b81a5037862d168b220262e5f4 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:36 ha-828033 dockerd[970]: time="2024-05-22T18:12:36.060310284Z" level=info msg="ignoring event" container=4269ef0c2c8a7a3fcea382b926da4ca23d7da87055ab7f0addfb719f0249696c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:12:39 ha-828033 dockerd[970]: time="2024-05-22T18:12:39.765046273Z" level=info msg="ignoring event" container=40ce01696320ab4ed8408a6605f3618c6400d32591f6c48979e8b96427e8c7fe module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:13:23 ha-828033 dockerd[970]: time="2024-05-22T18:13:23.315716517Z" level=info msg="ignoring event" container=600416053fd79df7594b878da62c29ecb8d7258a9db9707e9e72070a3fbbbe37 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:13:30 ha-828033 dockerd[970]: time="2024-05-22T18:13:30.061949319Z" level=info msg="ignoring event" container=084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:14:20 ha-828033 dockerd[970]: time="2024-05-22T18:14:20.737767317Z" level=info msg="ignoring event" container=b7e48a9d0a0d6c130eca5615190effb0da597a359fee18222140fd51cc4163f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:00 ha-828033 dockerd[970]: time="2024-05-22T18:15:00.062475870Z" level=info msg="ignoring event" container=9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	[the preceding dockerd line repeats 10 more times]
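The repeated dockerd "superfluous response.WriteHeader" entries are noise from Go's net/http (surfaced here through the otelhttp wrapper) rather than part of this failure: the standard library logs that message whenever a handler writes the status header more than once. A minimal reproduction of the message:

package main

import (
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		// The second call is ignored, and net/http logs:
		// "http: superfluous response.WriteHeader call from ..."
		w.WriteHeader(http.StatusBadRequest)
	})
	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
}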
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b9d5449cf888       91be940803172                                                                                         6 seconds ago       Exited              kube-apiserver            5                   2e74aa10cbe9d       kube-apiserver-ha-828033
	b7e48a9d0a0d6       25a1387cdab82                                                                                         56 seconds ago      Exited              kube-controller-manager   4                   8000bf1a7fc46       kube-controller-manager-ha-828033
	533c1df8e6e48       a52dc94f0a912                                                                                         3 minutes ago       Running             kube-scheduler            1                   3f8fe727d5f2c       kube-scheduler-ha-828033
	d884e203b30c3       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  0                   a52b9affd7ecf       kube-vip-ha-828033
	5e54bd5002a08       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      1                   62b9b95d560d3       etcd-ha-828033
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Exited              busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         21 minutes ago      Exited              coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         21 minutes ago      Exited              coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              21 minutes ago      Exited              kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         21 minutes ago      Exited              storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	faac4370a3326       747097150317f                                                                                         21 minutes ago      Exited              kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     21 minutes ago      Exited              kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         22 minutes ago      Exited              kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         22 minutes ago      Exited              etcd                      0                   ca6a020652c53       etcd-ha-828033
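
	# Diagnostic sketch (assumptions: docker runtime with minikube's k8s_ container
	# naming via cri-dockerd). The table above shows kube-apiserver on restart
	# attempt 5 and kube-controller-manager on attempt 4, both Exited, while only
	# etcd, kube-scheduler and kube-vip are Running: the control plane is
	# crash-looping. The same listing can be reproduced on the node with:
	minikube -p ha-828033 ssh -- docker ps -a --filter name=k8s_kube-apiserver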
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:05.422079    3772 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
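
	# Diagnostic sketch (assumes a shell on the ha-828033 node). The "connection
	# refused" on localhost:8443 is the Exited kube-apiserver from the container
	# table; once it is back up, the failed harness command can be retried as-is:
	sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig describe nodes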
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	{"level":"info","ts":"2024-05-22T18:11:30.644098Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:11:30.644166Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:11:30.644253Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.644363Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.653966Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.654011Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:11:30.654081Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-22T18:11:30.65579Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:30.655929Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:30.655974Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [5e54bd5002a0] <==
	{"level":"info","ts":"2024-05-22T18:11:54.960376Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:11:54.96051Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:11:54.960553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:11:54.963318Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:11:54.963634Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:11:54.963677Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:11:54.963823Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.963876Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.963887Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.964191Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:54.964206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:56.250663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.251833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:11:56.251888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.252046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.253903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:11:56.253943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:15:05 up 57 min,  0 users,  load average: 0.20, 0.33, 0.42
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9b9d5449cf88] <==
	I0522 18:15:00.047629       1 options.go:221] external host was not specified, using 192.168.49.2
	I0522 18:15:00.048450       1 server.go:148] Version: v1.30.1
	I0522 18:15:00.048494       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0522 18:15:00.048918       1 run.go:74] "command failed" err="x509: cannot parse IP address of length 0"
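
	# Diagnostic sketch (assumes minikube's default cert layout under
	# /var/lib/minikube/certs). Go's crypto/x509 raises "cannot parse IP address
	# of length 0" when a certificate carries an empty IP SAN, so the serving
	# cert the apiserver loads on startup is worth dumping:
	openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'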
	
	
	==> kube-controller-manager [b7e48a9d0a0d] <==
	I0522 18:14:10.275710       1 serving.go:380] Generated self-signed cert in-memory
	I0522 18:14:10.706883       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0522 18:14:10.706906       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:14:10.708232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0522 18:14:10.708241       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0522 18:14:10.708399       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0522 18:14:10.708438       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0522 18:14:20.710272       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
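
	# Diagnostic sketch: the controller context cannot be built because the
	# /healthz probe in the error above never succeeds. The same URL can be
	# polled by hand from the node:
	curl -sk https://192.168.49.2:8443/healthz; echo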
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [533c1df8e6e4] <==
	E0522 18:14:28.926033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:31.465895       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:31.465961       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.214119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.214180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.666046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.666106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:46.971856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:46.971918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:49.986221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:49.986269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:51.164192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:51.164258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:53.155290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:53.155333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:57.308357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:57.308427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:00.775132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:00.775178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.142808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.142853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.389919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.389963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:03.819888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:03.819951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	
	
	==> kube-scheduler [f457f32fdd43] <==
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:11:30.556935       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0522 18:11:30.557296       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0522 18:11:30.557341       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 22 18:14:48 ha-828033 kubelet[1423]: E0522 18:14:48.918564    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:14:49 ha-828033 kubelet[1423]: W0522 18:14:49.119630    1423 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-828033&limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:49 ha-828033 kubelet[1423]: E0522 18:14:49.119730    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-828033&limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:52 ha-828033 kubelet[1423]: W0522 18:14:52.195595    1423 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:52 ha-828033 kubelet[1423]: E0522 18:14:52.195689    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:53 ha-828033 kubelet[1423]: I0522 18:14:53.049137    1423 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:14:53 ha-828033 kubelet[1423]: I0522 18:14:53.918455    1423 scope.go:117] "RemoveContainer" containerID="b7e48a9d0a0d6c130eca5615190effb0da597a359fee18222140fd51cc4163f7"
	May 22 18:14:53 ha-828033 kubelet[1423]: E0522 18:14:53.918789    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263739    1423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263702    1423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e213dd1784ea  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:11:47.946431722 +0000 UTC m=+0.108066753,LastTimestamp:2024-05-22 18:11:47.946431722 +0000 UTC m=+0.108066753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263796    1423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:14:57 ha-828033 kubelet[1423]: E0522 18:14:57.978048    1423 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:14:58 ha-828033 kubelet[1423]: W0522 18:14:58.335640    1423 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:58 ha-828033 kubelet[1423]: E0522 18:14:58.335719    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:59 ha-828033 kubelet[1423]: I0522 18:14:59.918050    1423 scope.go:117] "RemoveContainer" containerID="084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94"
	May 22 18:15:00 ha-828033 kubelet[1423]: I0522 18:15:00.644461    1423 scope.go:117] "RemoveContainer" containerID="084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94"
	May 22 18:15:00 ha-828033 kubelet[1423]: I0522 18:15:00.645423    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:00 ha-828033 kubelet[1423]: E0522 18:15:00.645909    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:01 ha-828033 kubelet[1423]: I0522 18:15:01.656460    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:01 ha-828033 kubelet[1423]: E0522 18:15:01.656842    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:02 ha-828033 kubelet[1423]: I0522 18:15:02.264674    1423 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:15:02 ha-828033 kubelet[1423]: I0522 18:15:02.664649    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:02 ha-828033 kubelet[1423]: E0522 18:15:02.665058    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:04 ha-828033 kubelet[1423]: E0522 18:15:04.479543    1423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:15:04 ha-828033 kubelet[1423]: E0522 18:15:04.479560    1423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
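
	# Diagnostic sketch (assumes eth0 is the node's primary interface). kubelet
	# resolves control-plane.minikube.internal to the HA virtual IP
	# 192.168.49.254 and gets "no route to host", i.e. the kube-vip managed VIP
	# is not plumbed on this node. Two quick checks:
	grep control-plane.minikube.internal /etc/hosts
	ip addr show eth0 | grep 192.168.49.254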
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033: exit status 2 (245.175461ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-828033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (1.73s)

x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.61s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-828033" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 93862,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:11:41.545152719Z",
	            "FinishedAt": "2024-05-22T18:11:40.848620795Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30d0c98ea9130cbb800c462fe8803bee586edca8539288200e46ac88b3b024b2",
	            "SandboxKey": "/var/run/docker/netns/30d0c98ea913",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32807"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32806"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32803"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32805"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32804"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "2ba8cb80c659667fb6bda12680449f8c1464b6ce638e2e5d144c21ea7f6d07eb",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
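
For hand debugging, the same information as the full JSON dump above can be pulled field-by-field with Go templates; the two below are the exact templates the cli_runner invocations later in this log use, with this run's container name ha-828033:

	# State of the minikube node container:
	docker container inspect ha-828033 --format '{{.State.Status}}'
	# Host port that the container's SSH port (22/tcp) is published on (32807 in this run):
	docker container inspect ha-828033 -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
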
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 2 (242.21243ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
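
As the helper's "(may be ok)" note suggests, a non-zero status here is not necessarily a harness bug: per minikube's own `status --help` text, the exit code encodes host/cluster/kubernetes health in separate bits, so with the host still Running an exit status of 2 can simply reflect stopped cluster components. A hand-run check of both views (commands taken from this run):

	# Host-only status (what the helper ran), then the full per-component listing:
	out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033
	out/minikube-linux-amd64 status -p ha-828033; echo "exit=$?"
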
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | busybox-fc5497c4f-nhhq2 -- sh                                                    |           |         |         |                     |                     |
	|         | -c ping -c 1 192.168.49.1                                                        |           |         |         |                     |                     |
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033 -v=7                                                           | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-828033 -v=7                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC | 22 May 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	| node    | ha-828033 node delete m03 -v=7                                                   | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:11:41
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:11:41.134703   93587 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:11:41.134930   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.134938   93587 out.go:304] Setting ErrFile to fd 2...
	I0522 18:11:41.134942   93587 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:11:41.135123   93587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:11:41.135663   93587 out.go:298] Setting JSON to false
	I0522 18:11:41.136597   93587 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3245,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:11:41.136652   93587 start.go:139] virtualization: kvm guest
	I0522 18:11:41.138603   93587 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:11:41.139872   93587 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:11:41.139877   93587 notify.go:220] Checking for updates...
	I0522 18:11:41.141388   93587 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:11:41.142594   93587 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:41.143720   93587 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:11:41.144893   93587 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:11:41.145865   93587 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:11:41.147279   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:41.147391   93587 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:11:41.167202   93587 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:11:41.167354   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.213981   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.205284379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.214076   93587 docker.go:295] overlay module found
	I0522 18:11:41.216233   93587 out.go:177] * Using the docker driver based on existing profile
	I0522 18:11:41.217269   93587 start.go:297] selected driver: docker
	I0522 18:11:41.217284   93587 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.217363   93587 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:11:41.217435   93587 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:11:41.262537   93587 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:11:41.253560233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:11:41.263171   93587 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:11:41.263204   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:41.263213   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:41.263260   93587 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:41.265782   93587 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:11:41.266790   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:11:41.267878   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:11:41.268972   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:41.268999   93587 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:11:41.268994   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:11:41.269006   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:11:41.269151   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:11:41.269173   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:11:41.269261   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.283614   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:11:41.283635   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:11:41.283654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:11:41.283689   93587 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:11:41.283753   93587 start.go:364] duration metric: took 41.779µs to acquireMachinesLock for "ha-828033"
	I0522 18:11:41.283775   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:11:41.283786   93587 fix.go:54] fixHost starting: 
	I0522 18:11:41.283991   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.299535   93587 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:11:41.299560   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:11:41.301277   93587 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:11:41.302545   93587 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:11:41.550741   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:41.567723   93587 kic.go:430] container "ha-828033" state is running.
	I0522 18:11:41.568146   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:41.584785   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:11:41.585001   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:11:41.585061   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:41.601067   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:41.601257   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:41.601268   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:11:41.601940   93587 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58778->127.0.0.1:32807: read: connection reset by peer
	I0522 18:11:44.714380   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
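
The 18:11:41 dial failure above is the expected race after `docker start`: the container is up before sshd inside it is listening, and libmachine simply retries until the handshake at 18:11:44 succeeds. A hand-run equivalent using the port, key path, and user this run logged (assumes netcat is installed):

	# Wait for sshd on the published SSH port, then run the same probe command:
	until nc -z 127.0.0.1 32807; do sleep 1; done
	ssh -o StrictHostKeyChecking=no -p 32807 \
	  -i /home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa \
	  docker@127.0.0.1 hostname
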
	I0522 18:11:44.714404   93587 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:11:44.714459   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.731671   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.731883   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.731902   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:11:44.852943   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:11:44.853043   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:44.869576   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:44.869790   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:44.869817   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:11:44.979057   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:11:44.979089   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:11:44.979116   93587 ubuntu.go:177] setting up certificates
	I0522 18:11:44.979134   93587 provision.go:84] configureAuth start
	I0522 18:11:44.979199   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:44.994933   93587 provision.go:143] copyHostCerts
	I0522 18:11:44.994969   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995017   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:11:44.995033   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:11:44.995108   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:11:44.995224   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995252   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:11:44.995259   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:11:44.995322   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:11:44.995400   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995422   93587 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:11:44.995429   93587 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:11:44.995474   93587 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:11:44.995562   93587 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 18:11:45.135697   93587 provision.go:177] copyRemoteCerts
	I0522 18:11:45.135763   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:11:45.135818   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.152921   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.238902   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:11:45.238973   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:11:45.258885   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:11:45.258948   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0522 18:11:45.278444   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:11:45.278494   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:11:45.297780   93587 provision.go:87] duration metric: took 318.629986ms to configureAuth
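
provision.go:117 above generates the server certificate in Go via crypto/x509, signed by the profile's CA, with the SAN set logged at 18:11:44 (127.0.0.1, 192.168.49.2, ha-828033, localhost, minikube). A rough openssl sketch of a certificate with the same SANs (self-signed here for brevity, unlike minikube's CA-signed one, and assuming openssl >= 1.1.1 for -addext):

	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.ha-828033" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:ha-828033,DNS:localhost,DNS:minikube"
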
	I0522 18:11:45.297808   93587 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:11:45.297962   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:45.298004   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.313749   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.313923   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.313939   93587 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:11:45.427468   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:11:45.427494   93587 ubuntu.go:71] root file system type: overlay
	I0522 18:11:45.427580   93587 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:11:45.427626   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.444225   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.444413   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.444506   93587 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:11:45.564594   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:11:45.564669   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.580720   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:11:45.580903   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32807 <nil> <nil>}
	I0522 18:11:45.580920   93587 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:11:45.695828   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
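
The `diff -u ... || { mv ... && restart ... }` guard above makes the unit update idempotent: dockerd is only restarted when the freshly rendered unit actually differs from the installed one (the empty output here suggests the files matched, so no restart was needed). To see which ExecStart= is in effect after such an update:

	# Print the merged unit (as the harness itself does later in this log) and the effective ExecStart:
	sudo systemctl cat docker.service
	sudo systemctl show docker.service -p ExecStart
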
	I0522 18:11:45.695857   93587 machine.go:97] duration metric: took 4.110841908s to provisionDockerMachine
	I0522 18:11:45.695867   93587 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:11:45.695877   93587 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:11:45.695924   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:11:45.695955   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.712232   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.795493   93587 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:11:45.798393   93587 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:11:45.798434   93587 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:11:45.798444   93587 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:11:45.798453   93587 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:11:45.798471   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:11:45.798511   93587 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:11:45.798590   93587 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:11:45.798602   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:11:45.798690   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:11:45.806167   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:45.826168   93587 start.go:296] duration metric: took 130.28741ms for postStartSetup
	I0522 18:11:45.826240   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:11:45.826284   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.842515   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.923712   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:11:45.927618   93587 fix.go:56] duration metric: took 4.643832098s for fixHost
	I0522 18:11:45.927656   93587 start.go:83] releasing machines lock for "ha-828033", held for 4.643887227s
	I0522 18:11:45.927713   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:11:45.944156   93587 ssh_runner.go:195] Run: cat /version.json
	I0522 18:11:45.944201   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.944235   93587 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:11:45.944288   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:45.962364   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:45.962780   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:46.042681   93587 ssh_runner.go:195] Run: systemctl --version
	I0522 18:11:46.109435   93587 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:11:46.113688   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:11:46.129549   93587 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:11:46.129616   93587 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:11:46.137374   93587 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:11:46.138397   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.138424   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.138550   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.152035   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:11:46.160068   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:11:46.168623   93587 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.168674   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:11:46.177246   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.185321   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:11:46.193307   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:11:46.201602   93587 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:11:46.209350   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:11:46.217593   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:11:46.225824   93587 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:11:46.234419   93587 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:11:46.241490   93587 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:11:46.248503   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.323097   93587 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:11:46.411392   93587 start.go:494] detecting cgroup driver to use...
	I0522 18:11:46.411434   93587 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:11:46.411494   93587 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:11:46.422471   93587 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:11:46.422535   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:11:46.433407   93587 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:11:46.449148   93587 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:11:46.452464   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:11:46.460126   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:11:46.477806   93587 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:11:46.581019   93587 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:11:46.682974   93587 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:11:46.683118   93587 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:11:46.699398   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:46.783890   93587 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:11:47.043450   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:11:47.053302   93587 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:11:47.063710   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.072923   93587 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:11:47.142683   93587 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:11:47.222920   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.298978   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:11:47.310891   93587 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:11:47.320183   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.395538   93587 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:11:47.457881   93587 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:11:47.457934   93587 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:11:47.461279   93587 start.go:562] Will wait 60s for crictl version
	I0522 18:11:47.461343   93587 ssh_runner.go:195] Run: which crictl
	I0522 18:11:47.464606   93587 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:11:47.495432   93587 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:11:47.495495   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.517256   93587 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:11:47.541495   93587 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:11:47.541571   93587 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:11:47.557260   93587 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:11:47.560496   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:11:47.570471   93587 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:11:47.570586   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:11:47.570631   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.587878   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.587899   93587 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:11:47.587950   93587 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:11:47.606514   93587 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:11:47.606541   93587 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:11:47.606558   93587 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:11:47.606687   93587 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:11:47.606735   93587 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:11:47.652790   93587 cni.go:84] Creating CNI manager for ""
	I0522 18:11:47.652807   93587 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:11:47.652824   93587 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:11:47.652857   93587 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:11:47.652974   93587 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:11:47.652992   93587 kube-vip.go:115] generating kube-vip config ...
	I0522 18:11:47.653024   93587 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:11:47.663570   93587 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:11:47.663661   93587 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0522 18:11:47.663702   93587 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:11:47.671164   93587 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:11:47.671218   93587 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:11:47.678433   93587 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:11:47.693280   93587 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:11:47.707810   93587 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:11:47.722391   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:11:47.737026   93587 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:11:47.739845   93587 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:11:47.748775   93587 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:11:47.823891   93587 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:11:47.835577   93587 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:11:47.835598   93587 certs.go:194] generating shared ca certs ...
	I0522 18:11:47.835613   93587 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:47.835758   93587 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:11:47.835842   93587 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:11:47.835862   93587 certs.go:256] generating profile certs ...
	I0522 18:11:47.835960   93587 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:11:47.835985   93587 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:11:47.836008   93587 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:11:48.121096   93587 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:11:48.121121   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121275   93587 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:11:48.121287   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.121352   93587 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:11:48.121491   93587 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 18:11:48.121607   93587 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:11:48.121622   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:11:48.121634   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:11:48.121647   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:11:48.121659   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:11:48.121671   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:11:48.121684   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:11:48.121695   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:11:48.121706   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:11:48.121761   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:11:48.121786   93587 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:11:48.121796   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:11:48.121824   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:11:48.121846   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:11:48.121868   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:11:48.121906   93587 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:11:48.121932   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.121947   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.121963   93587 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.122488   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:11:48.143159   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:11:48.162787   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:11:48.182506   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:11:48.201936   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:11:48.221464   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:11:48.240723   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:11:48.260323   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:11:48.279765   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:11:48.299293   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:11:48.318925   93587 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:11:48.338728   93587 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:11:48.353309   93587 ssh_runner.go:195] Run: openssl version
	I0522 18:11:48.358049   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:11:48.365885   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368779   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.368829   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:11:48.374835   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:11:48.382122   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:11:48.389749   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392543   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.392586   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:11:48.400682   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:11:48.407800   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:11:48.415568   93587 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418291   93587 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.418342   93587 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:11:48.424132   93587 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:11:48.431192   93587 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:11:48.433941   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:11:48.439661   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:11:48.445338   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:11:48.451065   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:11:48.456627   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:11:48.461988   93587 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0522 18:11:48.467384   93587 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:11:48.467494   93587 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:11:48.485081   93587 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:11:48.492968   93587 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:11:48.492987   93587 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:11:48.492994   93587 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:11:48.493030   93587 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:11:48.500158   93587 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:11:48.500524   93587 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.500622   93587 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:11:48.500860   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.501224   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.501415   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.501829   93587 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:11:48.502116   93587 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:11:48.509165   93587 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:11:48.509192   93587 kubeadm.go:591] duration metric: took 16.193394ms to restartPrimaryControlPlane
	I0522 18:11:48.509203   93587 kubeadm.go:393] duration metric: took 41.824441ms to StartCluster
	I0522 18:11:48.509229   93587 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.509281   93587 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.509984   93587 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:11:48.510194   93587 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:11:48.510219   93587 start.go:240] waiting for startup goroutines ...
	I0522 18:11:48.510231   93587 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:11:48.510288   93587 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:11:48.510308   93587 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:11:48.510350   93587 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	I0522 18:11:48.510358   93587 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	W0522 18:11:48.510362   93587 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:11:48.510372   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:11:48.510392   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.510671   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.510833   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.531981   93587 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:11:48.529656   93587 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:11:48.532267   93587 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:11:48.533374   93587 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.533470   93587 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:11:48.533514   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.533609   93587 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:11:48.533626   93587 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:11:48.533656   93587 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:11:48.533986   93587 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:11:48.549936   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.550918   93587 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:11:48.550941   93587 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:11:48.550989   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:11:48.567412   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:11:48.643338   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:11:48.658967   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.695623   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.695654   93587 retry.go:31] will retry after 143.566199ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.690257    1624 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:11:48.710095   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.710122   93587 retry.go:31] will retry after 196.09206ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.705109    1634 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.839382   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:48.889703   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.889737   93587 retry.go:31] will retry after 405.6758ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.885511    1647 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.906883   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:48.957678   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:48.957706   93587 retry.go:31] will retry after 481.984617ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:48.953118    1657 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.296239   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.346745   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.346776   93587 retry.go:31] will retry after 298.316645ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.341816    1668 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.439941   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.490892   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.490924   93587 retry.go:31] will retry after 365.174941ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.486412    1679 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.646180   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:49.695995   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.696026   93587 retry.go:31] will retry after 622.662088ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.691846    1690 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.856274   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:49.908213   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:49.908240   93587 retry.go:31] will retry after 465.598462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:49.903366    1700 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.319768   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:50.370352   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.370393   93587 retry.go:31] will retry after 1.153542566s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.365475    1711 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.374493   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:50.427342   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:50.427370   93587 retry.go:31] will retry after 1.760070779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:50.422724    1722 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.524500   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:51.576096   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:51.576127   93587 retry.go:31] will retry after 1.395298614s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:51.571038    1733 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.187677   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:52.238330   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.238363   93587 retry.go:31] will retry after 2.838643955s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:52.234300    1744 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:52.972468   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:53.024864   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:53.024894   93587 retry.go:31] will retry after 3.988192679s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:53.019220    1766 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.078985   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:55.254504   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:55.254547   93587 retry.go:31] will retry after 1.898473733s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:55.248712    2256 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.013394   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:11:57.065110   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.065143   93587 retry.go:31] will retry after 3.026639765s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.060012    2269 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.153313   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:11:57.205183   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:11:57.205216   93587 retry.go:31] will retry after 4.512900176s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:11:57.200376    2280 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.093267   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:00.144874   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:00.144907   93587 retry.go:31] will retry after 4.624822439s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:00.139835    2302 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.718976   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:01.770260   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:01.770289   93587 retry.go:31] will retry after 6.597322484s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:01.765216    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.770613   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:04.821736   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:04.821765   93587 retry.go:31] will retry after 6.276558674s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:04.817355    2336 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.369690   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:08.421665   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:08.421695   93587 retry.go:31] will retry after 4.88361876s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:08.416294    2376 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.099397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:11.150176   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:11.150214   93587 retry.go:31] will retry after 14.618513106s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:11.145873    2390 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.307405   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:13.358292   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:13.358325   93587 retry.go:31] will retry after 11.702428572s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:13.353152    2412 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.064329   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:25.116230   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.116269   93587 retry.go:31] will retry after 20.635119238s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.111099    2515 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.768934   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:25.819335   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:25.819366   93587 retry.go:31] will retry after 22.551209597s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:25.814859    2526 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.755397   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:12:45.807295   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:45.807335   93587 retry.go:31] will retry after 48.223563966s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:45.802125    2720 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.371303   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:12:48.422526   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:12:48.422554   93587 retry.go:31] will retry after 21.925283254s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:12:48.417694    2751 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:13:10.348911   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:13:10.401430   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:10.401550   93587 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:10.396741    2806 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:13:34.031408   93587 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:13:34.084103   93587 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:13:34.084199   93587 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:13:34.079345    3011 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:13:34.087605   93587 out.go:177] * Enabled addons: 
	I0522 18:13:34.092136   93587 addons.go:505] duration metric: took 1m45.581904576s for enable addons: enabled=[]
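
The loop that just ended is a retry with roughly doubling, jittered delays (1.15s, 1.76s, ... up to 48.2s) until the addon-enable deadline lapses; since the apiserver never comes up, every attempt fails identically and the enabled-addon list ends up empty. A minimal sketch of that retry-with-backoff shape; retryExpo below is a hypothetical stand-in for illustration, not minikube's actual retry API:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo is a hypothetical stand-in for the backoff visible in the
    // retry.go lines above: the wait roughly doubles after each failure,
    // with jitter, until the overall deadline is spent.
    func retryExpo(op func() error, initial, maxWait, deadline time.Duration) error {
        start := time.Now()
        wait := initial
        for {
            err := op()
            if err == nil {
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("giving up after %s: %w",
                    time.Since(start).Round(time.Millisecond), err)
            }
            // Sleep somewhere in [wait/2, wait], then double the window.
            time.Sleep(wait/2 + time.Duration(rand.Int63n(int64(wait/2)+1)))
            wait *= 2
            if wait > maxWait {
                wait = maxWait
            }
        }
    }

    func main() {
        attempt := 0
        err := retryExpo(func() error {
            attempt++
            return fmt.Errorf("apply attempt %d: connect: connection refused", attempt)
        }, 100*time.Millisecond, 2*time.Second, time.Second)
        fmt.Println(err)
    }
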
	I0522 18:13:34.092168   93587 start.go:245] waiting for cluster config update ...
	I0522 18:13:34.092175   93587 start.go:254] writing updated cluster config ...
	I0522 18:13:34.093767   93587 out.go:177] 
	I0522 18:13:34.094950   93587 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:13:34.095010   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.096476   93587 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:13:34.097816   93587 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:13:34.098828   93587 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:13:34.099818   93587 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:13:34.099834   93587 cache.go:56] Caching tarball of preloaded images
	I0522 18:13:34.099879   93587 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:13:34.099916   93587 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:13:34.099930   93587 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:13:34.100028   93587 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:13:34.116605   93587 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:13:34.116636   93587 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:13:34.116654   93587 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:13:34.116685   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:34.116739   93587 start.go:364] duration metric: took 36.742µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:34.116754   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:34.116759   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:34.116975   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.131815   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:13:34.131835   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:34.133519   93587 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:13:34.134577   93587 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:13:34.386505   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:34.403758   93587 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:13:34.404176   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:34.421199   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:13:34.421255   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:34.437668   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:13:34.438642   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.438697   93587 retry.go:31] will retry after 159.621723ms: ssh: handshake failed: read tcp 127.0.0.1:43254->127.0.0.1:32812: read: connection reset by peer
	W0522 18:13:34.599398   93587 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
	I0522 18:13:34.599427   93587 retry.go:31] will retry after 217.688969ms: ssh: handshake failed: read tcp 127.0.0.1:43264->127.0.0.1:32812: read: connection reset by peer
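
The two handshake failures right after docker start are an ordinary startup race: the container is running but sshd inside it is not yet accepting connections, so the first dials are reset and retried. A minimal sketch of that wait-for-ssh pattern, assuming only the standard library (waitForSSH and its timings are illustrative, not minikube's sshutil):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the forwarded SSH port until a TCP connection is
    // accepted or the deadline passes. Resets ("connection reset by peer")
    // right after a container restart are treated as retryable, matching
    // the sshutil.go lines above. This only proves the port accepts TCP;
    // the real client still has to complete the SSH handshake afterwards.
    func waitForSSH(addr string, deadline time.Duration) error {
        start := time.Now()
        backoff := 150 * time.Millisecond
        for {
            conn, err := net.DialTimeout("tcp", addr, time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            if time.Since(start) > deadline {
                return fmt.Errorf("ssh port never came up: %w", err)
            }
            time.Sleep(backoff)
            backoff *= 2
        }
    }

    func main() {
        fmt.Println(waitForSSH("127.0.0.1:32812", 10*time.Second))
    }
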
	I0522 18:13:34.948280   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:13:34.952728   93587 fix.go:56] duration metric: took 835.959949ms for fixHost
	I0522 18:13:34.952759   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 836.005567ms
	W0522 18:13:34.952776   93587 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:13:34.952870   93587 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:13:34.952882   93587 start.go:728] Will try again in 5 seconds ...
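
"container addresses should have 2 values, got 1 values: []" is the pivotal error of this restart. The inspect template used above renders the container's entry in the "ha-828033-m02" network as "<IPv4>,<IPv6>"; the caller splits on the comma and requires exactly two fields. If the restarted container holds no entry under that network name, the template prints an empty string, the split yields a single empty field, and host provisioning fails; the configureAuth loop below keeps hitting the same wall. A minimal sketch of that parse step (parseContainerIPs is a hypothetical name for illustration):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseContainerIPs mimics the check behind the error above: the docker
    // inspect template is expected to print "IPv4,IPv6" for the container's
    // entry in the named network, and anything else is rejected.
    func parseContainerIPs(templateOutput string) (ipv4, ipv6 string, err error) {
        fields := strings.Split(strings.TrimSpace(templateOutput), ",")
        if len(fields) != 2 {
            return "", "", fmt.Errorf(
                "container addresses should have 2 values, got %d values: %v",
                len(fields), fields)
        }
        return fields[0], fields[1], nil
    }

    func main() {
        // Container attached to the network, no IPv6 configured: still 2 fields.
        fmt.Println(parseContainerIPs("192.168.49.3,"))
        // Container missing from the network: the template renders "",
        // which splits into a single empty field, reproducing the log error.
        fmt.Println(parseContainerIPs(""))
    }
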
	I0522 18:13:39.953931   93587 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:13:39.954069   93587 start.go:364] duration metric: took 66.237µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:13:39.954098   93587 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:13:39.954106   93587 fix.go:54] fixHost starting: m02
	I0522 18:13:39.954430   93587 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:13:39.971326   93587 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:13:39.971351   93587 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:13:39.973352   93587 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:13:39.974806   93587 machine.go:94] provisionDockerMachine start ...
	I0522 18:13:39.974895   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:39.990164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:39.990366   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:39.990382   93587 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:13:40.106411   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
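
The &{{{<nil> ...}} value above is libmachine's native SSH client configuration pointed at 127.0.0.1:32812. A minimal sketch of issuing the same kind of provisioning command over SSH, assuming golang.org/x/crypto/ssh and reusing the key path from the sshutil.go line further up; the user name, host key policy, and error handling here are illustrative:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Key path copied from the sshutil.go line above.
        pem, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(pem)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32812", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        fmt.Printf("out=%q err=%v\n", out, err)
    }
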
	I0522 18:13:40.106441   93587 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:13:40.106497   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.123164   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.123396   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.123412   93587 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:13:40.245387   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:13:40.245458   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:13:40.262355   93587 main.go:141] libmachine: Using SSH client type: native
	I0522 18:13:40.262539   93587 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32812 <nil> <nil>}
	I0522 18:13:40.262563   93587 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:13:40.375115   93587 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:13:40.375140   93587 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:13:40.375156   93587 ubuntu.go:177] setting up certificates
	I0522 18:13:40.375167   93587 provision.go:84] configureAuth start
	I0522 18:13:40.375212   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.390878   93587 provision.go:87] duration metric: took 15.702592ms to configureAuth
	W0522 18:13:40.390903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.390928   93587 retry.go:31] will retry after 70.356µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.392042   93587 provision.go:84] configureAuth start
	I0522 18:13:40.392097   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.408007   93587 provision.go:87] duration metric: took 15.947883ms to configureAuth
	W0522 18:13:40.408024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.408044   93587 retry.go:31] will retry after 137.47µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.409151   93587 provision.go:84] configureAuth start
	I0522 18:13:40.409201   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.423891   93587 provision.go:87] duration metric: took 14.725235ms to configureAuth
	W0522 18:13:40.423909   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.423925   93587 retry.go:31] will retry after 262.374µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.425034   93587 provision.go:84] configureAuth start
	I0522 18:13:40.425086   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.439293   93587 provision.go:87] duration metric: took 14.241319ms to configureAuth
	W0522 18:13:40.439314   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.439330   93587 retry.go:31] will retry after 298.899µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.440439   93587 provision.go:84] configureAuth start
	I0522 18:13:40.440498   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.455314   93587 provision.go:87] duration metric: took 14.857395ms to configureAuth
	W0522 18:13:40.455331   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.455346   93587 retry.go:31] will retry after 425.458µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.456456   93587 provision.go:84] configureAuth start
	I0522 18:13:40.456517   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.473826   93587 provision.go:87] duration metric: took 17.346003ms to configureAuth
	W0522 18:13:40.473848   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.473864   93587 retry.go:31] will retry after 794.432µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.474977   93587 provision.go:84] configureAuth start
	I0522 18:13:40.475045   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.491066   93587 provision.go:87] duration metric: took 16.070525ms to configureAuth
	W0522 18:13:40.491088   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.491107   93587 retry.go:31] will retry after 1.614344ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.493281   93587 provision.go:84] configureAuth start
	I0522 18:13:40.493345   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.508551   93587 provision.go:87] duration metric: took 15.254686ms to configureAuth
	W0522 18:13:40.508569   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.508587   93587 retry.go:31] will retry after 998.104µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.509712   93587 provision.go:84] configureAuth start
	I0522 18:13:40.509790   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.525006   93587 provision.go:87] duration metric: took 15.263842ms to configureAuth
	W0522 18:13:40.525024   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.525042   93587 retry.go:31] will retry after 3.338034ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.529222   93587 provision.go:84] configureAuth start
	I0522 18:13:40.529282   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.544880   93587 provision.go:87] duration metric: took 15.639211ms to configureAuth
	W0522 18:13:40.544898   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.544922   93587 retry.go:31] will retry after 3.40783ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.549101   93587 provision.go:84] configureAuth start
	I0522 18:13:40.549153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.564670   93587 provision.go:87] duration metric: took 15.552453ms to configureAuth
	W0522 18:13:40.564691   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.564707   93587 retry.go:31] will retry after 7.302355ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.572891   93587 provision.go:84] configureAuth start
	I0522 18:13:40.572957   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.588884   93587 provision.go:87] duration metric: took 15.972307ms to configureAuth
	W0522 18:13:40.588903   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.588921   93587 retry.go:31] will retry after 5.301531ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.595100   93587 provision.go:84] configureAuth start
	I0522 18:13:40.595153   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.610191   93587 provision.go:87] duration metric: took 15.074227ms to configureAuth
	W0522 18:13:40.610211   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.610230   93587 retry.go:31] will retry after 11.026949ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.621370   93587 provision.go:84] configureAuth start
	I0522 18:13:40.621446   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.636327   93587 provision.go:87] duration metric: took 14.934708ms to configureAuth
	W0522 18:13:40.636340   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.636356   93587 retry.go:31] will retry after 25.960513ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.662569   93587 provision.go:84] configureAuth start
	I0522 18:13:40.662637   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.677809   93587 provision.go:87] duration metric: took 15.220921ms to configureAuth
	W0522 18:13:40.677824   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.677840   93587 retry.go:31] will retry after 32.75774ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.711021   93587 provision.go:84] configureAuth start
	I0522 18:13:40.711093   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.726493   93587 provision.go:87] duration metric: took 15.45214ms to configureAuth
	W0522 18:13:40.726508   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.726524   93587 retry.go:31] will retry after 36.849589ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.763725   93587 provision.go:84] configureAuth start
	I0522 18:13:40.763797   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.779769   93587 provision.go:87] duration metric: took 16.019178ms to configureAuth
	W0522 18:13:40.779786   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.779806   93587 retry.go:31] will retry after 56.725665ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.837004   93587 provision.go:84] configureAuth start
	I0522 18:13:40.837114   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.852417   93587 provision.go:87] duration metric: took 15.386685ms to configureAuth
	W0522 18:13:40.852435   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.852451   93587 retry.go:31] will retry after 111.712266ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.964732   93587 provision.go:84] configureAuth start
	I0522 18:13:40.964841   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:40.981335   93587 provision.go:87] duration metric: took 16.561934ms to configureAuth
	W0522 18:13:40.981354   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:40.981372   93587 retry.go:31] will retry after 119.589549ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.101655   93587 provision.go:84] configureAuth start
	I0522 18:13:41.101767   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.118304   93587 provision.go:87] duration metric: took 16.624114ms to configureAuth
	W0522 18:13:41.118332   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.118349   93587 retry.go:31] will retry after 172.20415ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.290646   93587 provision.go:84] configureAuth start
	I0522 18:13:41.290734   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.306781   93587 provision.go:87] duration metric: took 16.099389ms to configureAuth
	W0522 18:13:41.306799   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.306815   93587 retry.go:31] will retry after 467.479675ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.774386   93587 provision.go:84] configureAuth start
	I0522 18:13:41.774495   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:41.790035   93587 provision.go:87] duration metric: took 15.610421ms to configureAuth
	W0522 18:13:41.790054   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:41.790070   93587 retry.go:31] will retry after 663.257318ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.453817   93587 provision.go:84] configureAuth start
	I0522 18:13:42.453935   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.473961   93587 provision.go:87] duration metric: took 20.113537ms to configureAuth
	W0522 18:13:42.473982   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.473999   93587 retry.go:31] will retry after 453.336791ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.928400   93587 provision.go:84] configureAuth start
	I0522 18:13:42.928480   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:42.944835   93587 provision.go:87] duration metric: took 16.404983ms to configureAuth
	W0522 18:13:42.944858   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:42.944874   93587 retry.go:31] will retry after 1.661774658s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.607615   93587 provision.go:84] configureAuth start
	I0522 18:13:44.607723   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:44.623466   93587 provision.go:87] duration metric: took 15.817599ms to configureAuth
	W0522 18:13:44.623490   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:44.623506   93587 retry.go:31] will retry after 2.087899686s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.711969   93587 provision.go:84] configureAuth start
	I0522 18:13:46.712058   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:46.728600   93587 provision.go:87] duration metric: took 16.596208ms to configureAuth
	W0522 18:13:46.728620   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:46.728636   93587 retry.go:31] will retry after 1.751255493s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.480034   93587 provision.go:84] configureAuth start
	I0522 18:13:48.480138   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:48.495909   93587 provision.go:87] duration metric: took 15.845589ms to configureAuth
	W0522 18:13:48.495927   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:48.495944   93587 retry.go:31] will retry after 3.216449309s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.712476   93587 provision.go:84] configureAuth start
	I0522 18:13:51.712600   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:51.728675   93587 provision.go:87] duration metric: took 16.149731ms to configureAuth
	W0522 18:13:51.728694   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:51.728713   93587 retry.go:31] will retry after 4.442037166s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.171311   93587 provision.go:84] configureAuth start
	I0522 18:13:56.171390   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:13:56.188514   93587 provision.go:87] duration metric: took 17.174931ms to configureAuth
	W0522 18:13:56.188532   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:13:56.188548   93587 retry.go:31] will retry after 12.471520302s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.660614   93587 provision.go:84] configureAuth start
	I0522 18:14:08.660710   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:08.677166   93587 provision.go:87] duration metric: took 16.519042ms to configureAuth
	W0522 18:14:08.677185   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:08.677201   93587 retry.go:31] will retry after 10.952874884s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.630561   93587 provision.go:84] configureAuth start
	I0522 18:14:19.630655   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:19.646798   93587 provision.go:87] duration metric: took 16.206763ms to configureAuth
	W0522 18:14:19.646816   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:19.646833   93587 retry.go:31] will retry after 24.173560862s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.822465   93587 provision.go:84] configureAuth start
	I0522 18:14:43.822544   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:14:43.838993   93587 provision.go:87] duration metric: took 16.502247ms to configureAuth
	W0522 18:14:43.839013   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:14:43.839034   93587 retry.go:31] will retry after 18.866878171s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.707256   93587 provision.go:84] configureAuth start
	I0522 18:15:02.707363   93587 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:15:02.723837   93587 provision.go:87] duration metric: took 16.544569ms to configureAuth
	W0522 18:15:02.723855   93587 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723871   93587 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.723880   93587 machine.go:97] duration metric: took 1m22.749059211s to provisionDockerMachine
	I0522 18:15:02.723935   93587 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:02.723966   93587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:15:02.739583   93587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:15:02.819663   93587 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:02.823565   93587 fix.go:56] duration metric: took 1m22.869456878s for fixHost
	I0522 18:15:02.823585   93587 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m22.869501248s
	W0522 18:15:02.823659   93587 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:15:02.826139   93587 out.go:177] 
	W0522 18:15:02.827395   93587 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:15:02.827414   93587 out.go:239] * 
	W0522 18:15:02.828270   93587 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:15:02.829647   93587 out.go:177] 
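
The failure above is mechanical and reproducible without Docker. The -f template that minikube passes to `docker container inspect` renders nothing when the container is not attached to the named network, and splitting that empty output on "," yields a single empty element, which is exactly the "got 1 values: []" in the retry loop. A minimal Go sketch of the path, using a stand-in map (an assumption for illustration) in place of Docker's real inspect data:

    package main

    import (
        "bytes"
        "fmt"
        "strings"
        "text/template"
    )

    func main() {
        // The exact format string from the cli_runner lines above; docker
        // evaluates -f arguments with text/template semantics.
        const f = `{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`

        // Stand-in inspect data (assumed): the container is attached only to
        // "bridge", so the lookup for "ha-828033-m02" misses, index returns
        // nil, and the {{with}} body is skipped entirely.
        data := map[string]any{
            "NetworkSettings": map[string]any{
                "Networks": map[string]any{
                    "bridge": map[string]any{"IPAddress": "172.17.0.2"},
                },
            },
        }

        var out bytes.Buffer
        if err := template.Must(template.New("ip").Parse(f)).Execute(&out, data); err != nil {
            panic(err)
        }

        ips := strings.Split(strings.TrimSpace(out.String()), ",")
        fmt.Printf("got %d values: %v\n", len(ips), ips) // got 1 values: []
    }

The jittered retry intervals (453ms growing to roughly 24s) re-run the same inspect each time, so the loop cannot converge while the network attachment itself is missing.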
	
	
	==> Docker <==
	May 22 18:13:30 ha-828033 dockerd[970]: time="2024-05-22T18:13:30.061949319Z" level=info msg="ignoring event" container=084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:14:20 ha-828033 dockerd[970]: time="2024-05-22T18:14:20.737767317Z" level=info msg="ignoring event" container=b7e48a9d0a0d6c130eca5615190effb0da597a359fee18222140fd51cc4163f7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:00 ha-828033 dockerd[970]: time="2024-05-22T18:15:00.062475870Z" level=info msg="ignoring event" container=9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:03 ha-828033 dockerd[970]: 2024/05/22 18:15:03 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:15:05 ha-828033 dockerd[970]: 2024/05/22 18:15:05 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9b9d5449cf888       91be940803172                                                                                         7 seconds ago       Exited              kube-apiserver            5                   2e74aa10cbe9d       kube-apiserver-ha-828033
	b7e48a9d0a0d6       25a1387cdab82                                                                                         57 seconds ago      Exited              kube-controller-manager   4                   8000bf1a7fc46       kube-controller-manager-ha-828033
	533c1df8e6e48       a52dc94f0a912                                                                                         3 minutes ago       Running             kube-scheduler            1                   3f8fe727d5f2c       kube-scheduler-ha-828033
	d884e203b30c3       38af8ddebf499                                                                                         3 minutes ago       Running             kube-vip                  0                   a52b9affd7ecf       kube-vip-ha-828033
	5e54bd5002a08       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      1                   62b9b95d560d3       etcd-ha-828033
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Exited              busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         21 minutes ago      Exited              coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         21 minutes ago      Exited              coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              21 minutes ago      Exited              kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         21 minutes ago      Exited              storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	faac4370a3326       747097150317f                                                                                         21 minutes ago      Exited              kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     21 minutes ago      Exited              kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	f457f32fdd43d       a52dc94f0a912                                                                                         22 minutes ago      Exited              kube-scheduler            0                   4d7edccdc49b2       kube-scheduler-ha-828033
	3a9c3dbadc741       3861cfcd7c04c                                                                                         22 minutes ago      Exited              etcd                      0                   ca6a020652c53       etcd-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
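
The NXDOMAIN/NOERROR pairs above are ordinary resolver search-list expansion. With the standard in-cluster resolv.conf (ndots:5), a lookup of "kubernetes.default" is first tried with each search domain appended, so "kubernetes.default.default.svc.cluster.local" fails before "kubernetes.default.svc.cluster.local" resolves. A sketch of that expansion, assuming the stock cluster search domains:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Assumed pod resolv.conf: search default.svc.cluster.local
        // svc.cluster.local cluster.local, with options ndots:5.
        name := "kubernetes.default"
        search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}

        // Names with fewer than ndots dots go through the search list first.
        if strings.Count(name, ".") < 5 {
            for _, s := range search {
                fmt.Printf("query %s.%s\n", name, s) // the first candidate NXDOMAINs above
            }
        }
        fmt.Printf("query %s.\n", name) // the literal name is tried last
    }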
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:07.041211    4026 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [3a9c3dbadc74] <==
	{"level":"info","ts":"2024-05-22T17:53:06.946946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.94698Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T17:53:06.947249Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947244Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947459Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T17:53:06.947403Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.947503Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T17:53:06.949353Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T17:53:06.949734Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:03:06.972392Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":969}
	{"level":"info","ts":"2024-05-22T18:03:06.979298Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":969,"took":"6.628211ms","hash":385253431,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":2453504,"current-db-size-in-use":"2.5 MB"}
	{"level":"info","ts":"2024-05-22T18:03:06.979338Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":385253431,"revision":969,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:08:06.976542Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1510}
	{"level":"info","ts":"2024-05-22T18:08:06.979233Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1510,"took":"2.403075ms","hash":4270919024,"current-db-size-bytes":2453504,"current-db-size":"2.5 MB","current-db-size-in-use-bytes":1830912,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2024-05-22T18:08:06.979278Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4270919024,"revision":1510,"compact-revision":969}
	{"level":"info","ts":"2024-05-22T18:11:30.644098Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:11:30.644166Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:11:30.644253Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.644363Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.653966Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:11:30.654011Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:11:30.654081Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-22T18:11:30.65579Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:30.655929Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:30.655974Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [5e54bd5002a0] <==
	{"level":"info","ts":"2024-05-22T18:11:54.960376Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:11:54.96051Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:11:54.960553Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:11:54.963318Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:11:54.963634Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:11:54.963677Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:11:54.963823Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.963876Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.963887Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:11:54.964191Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:54.964206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:56.250663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.251833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:11:56.251888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.252046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.253903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:11:56.253943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:15:07 up 57 min,  0 users,  load average: 0.20, 0.33, 0.42
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
	
	
	==> kube-apiserver [9b9d5449cf88] <==
	I0522 18:15:00.047629       1 options.go:221] external host was not specified, using 192.168.49.2
	I0522 18:15:00.048450       1 server.go:148] Version: v1.30.1
	I0522 18:15:00.048494       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0522 18:15:00.048918       1 run.go:74] "command failed" err="x509: cannot parse IP address of length 0"
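
"x509: cannot parse IP address of length 0" is the message Go's crypto/x509 parser emits when a certificate carries a zero-length IP SAN. That is consistent with the empty IP obtained during provisioning ending up in a regenerated serving certificate, though the report itself does not show which certificate is at fault. A self-contained round-trip sketch, where the empty net.IP SAN is the assumed bad input:

    package main

    import (
        "crypto/ecdsa"
        "crypto/elliptic"
        "crypto/rand"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{CommonName: "demo"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(time.Hour),
            // An empty net.IP marshals as a zero-length IP SAN entry.
            IPAddresses: []net.IP{{}},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err) // creation does not validate SAN IP lengths
        }
        _, err = x509.ParseCertificate(der)
        fmt.Println(err) // x509: cannot parse IP address of length 0
    }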
	
	
	==> kube-controller-manager [b7e48a9d0a0d] <==
	I0522 18:14:10.275710       1 serving.go:380] Generated self-signed cert in-memory
	I0522 18:14:10.706883       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0522 18:14:10.706906       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:14:10.708232       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0522 18:14:10.708241       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0522 18:14:10.708399       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0522 18:14:10.708438       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0522 18:14:20.710272       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [533c1df8e6e4] <==
	E0522 18:14:28.926033       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:31.465895       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:31.465961       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.214119       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.214180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.666046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.666106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:46.971856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:46.971918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:49.986221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:49.986269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:51.164192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:51.164258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:53.155290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:53.155333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:57.308357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:57.308427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:00.775132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:00.775178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.142808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.142853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.389919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.389963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:03.819888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:03.819951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	
	
	==> kube-scheduler [f457f32fdd43] <==
	E0522 17:53:09.148499       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:09.146997       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0522 17:53:09.148518       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0522 17:53:09.148341       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0522 17:53:09.148545       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0522 17:53:09.148892       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:09.148932       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.084722       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.084765       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.112919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 17:53:10.112952       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 17:53:10.175208       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.175254       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.194289       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 17:53:10.194329       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 17:53:10.200134       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 17:53:10.200161       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 17:53:10.203122       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0522 17:53:10.203156       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0522 17:53:10.344173       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 17:53:10.344204       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 17:53:12.774271       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:11:30.556935       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0522 18:11:30.557296       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0522 18:11:30.557341       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 22 18:14:52 ha-828033 kubelet[1423]: E0522 18:14:52.195689    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:53 ha-828033 kubelet[1423]: I0522 18:14:53.049137    1423 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:14:53 ha-828033 kubelet[1423]: I0522 18:14:53.918455    1423 scope.go:117] "RemoveContainer" containerID="b7e48a9d0a0d6c130eca5615190effb0da597a359fee18222140fd51cc4163f7"
	May 22 18:14:53 ha-828033 kubelet[1423]: E0522 18:14:53.918789    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263739    1423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263702    1423 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e213dd1784ea  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:11:47.946431722 +0000 UTC m=+0.108066753,LastTimestamp:2024-05-22 18:11:47.946431722 +0000 UTC m=+0.108066753,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:14:55 ha-828033 kubelet[1423]: E0522 18:14:55.263796    1423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:14:57 ha-828033 kubelet[1423]: E0522 18:14:57.978048    1423 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:14:58 ha-828033 kubelet[1423]: W0522 18:14:58.335640    1423 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:58 ha-828033 kubelet[1423]: E0522 18:14:58.335719    1423 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:14:59 ha-828033 kubelet[1423]: I0522 18:14:59.918050    1423 scope.go:117] "RemoveContainer" containerID="084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94"
	May 22 18:15:00 ha-828033 kubelet[1423]: I0522 18:15:00.644461    1423 scope.go:117] "RemoveContainer" containerID="084c0fca8d675868754a27cbabc92a0e6a5aa220a66ebc45d7817c0d4e99bf94"
	May 22 18:15:00 ha-828033 kubelet[1423]: I0522 18:15:00.645423    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:00 ha-828033 kubelet[1423]: E0522 18:15:00.645909    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:01 ha-828033 kubelet[1423]: I0522 18:15:01.656460    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:01 ha-828033 kubelet[1423]: E0522 18:15:01.656842    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:02 ha-828033 kubelet[1423]: I0522 18:15:02.264674    1423 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:15:02 ha-828033 kubelet[1423]: I0522 18:15:02.664649    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:02 ha-828033 kubelet[1423]: E0522 18:15:02.665058    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:15:04 ha-828033 kubelet[1423]: E0522 18:15:04.479543    1423 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:15:04 ha-828033 kubelet[1423]: E0522 18:15:04.479560    1423 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:15:05 ha-828033 kubelet[1423]: I0522 18:15:05.917780    1423 scope.go:117] "RemoveContainer" containerID="b7e48a9d0a0d6c130eca5615190effb0da597a359fee18222140fd51cc4163f7"
	May 22 18:15:05 ha-828033 kubelet[1423]: E0522 18:15:05.918131    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:15:06 ha-828033 kubelet[1423]: I0522 18:15:06.441292    1423 scope.go:117] "RemoveContainer" containerID="9b9d5449cf8883ae2df6700755c33d5762157674a7a0aa963a2d1d01f90afeb1"
	May 22 18:15:06 ha-828033 kubelet[1423]: E0522 18:15:06.441800    1423 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	
	
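The repeating RemoveContainer/CrashLoopBackOff lines above follow the kubelet's standard crash back-off: 10s after the first crash, doubling per crash, capped at 5m. "back-off 2m40s" is 160s = 10s x 2^4, so kube-apiserver has already crashed at least five times here; the "back-off 1m20s" for kube-controller-manager is one doubling earlier. A minimal Go sketch of that progression (illustrative only, not minikube code):

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second // kubelet's default initial crash back-off
	for crash := 1; crash <= 5; crash++ {
		fmt.Printf("crash %d -> back-off %v\n", crash, backoff)
		backoff *= 2
		if backoff > 5*time.Minute {
			backoff = 5 * time.Minute // the kubelet caps the back-off here
		}
	}
	// crash 4 -> back-off 1m20s, crash 5 -> back-off 2m40s,
	// matching the kube-controller-manager and kube-apiserver lines above.
}
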
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

-- /stdout --
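The storage-provisioner section of the log above shows the client-go leader-election handshake (leaderelection.go:243/253): each replica tries to acquire a named lock in kube-system, and only the winner starts the provisioner controller. A minimal sketch of the same pattern using the current Lease-based lock (the provisioner in this log still uses the older Endpoints lock, as its Event line shows); the lock name and namespace come from the log, the identity string is illustrative:

package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// In-cluster config; the provisioner pod runs with a service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lock name/namespace as in the log; Lease-based instead of Endpoints.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "replica-1"}, // illustrative; must be unique per replica
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long an acquired lease is valid
		RenewDeadline: 10 * time.Second, // the leader must renew within this window
		RetryPeriod:   2 * time.Second,  // non-leaders retry at this interval
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// "successfully acquired lease": start the provisioner controller here.
			},
			OnStoppedLeading: func() {
				// Lost the lease: stop doing work immediately.
			},
		},
	})
}
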
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033: exit status 2 (240.719664ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-828033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.61s)

x
+
TestMultiControlPlane/serial/StopCluster (2.42s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-828033 stop -v=7 --alsologtostderr: (2.269994347s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr: exit status 7 (71.315984ms)

-- stdout --
	ha-828033
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-828033-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	I0522 18:15:09.907338  101766 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:09.907433  101766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:09.907441  101766 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:09.907445  101766 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:09.907663  101766 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:09.907817  101766 out.go:298] Setting JSON to false
	I0522 18:15:09.907839  101766 mustload.go:65] Loading cluster: ha-828033
	I0522 18:15:09.907938  101766 notify.go:220] Checking for updates...
	I0522 18:15:09.908147  101766 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:09.908161  101766 status.go:255] checking status of ha-828033 ...
	I0522 18:15:09.908511  101766 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:09.924858  101766 status.go:330] ha-828033 host status = "Stopped" (err=<nil>)
	I0522 18:15:09.924884  101766 status.go:343] host is not running, skipping remaining checks
	I0522 18:15:09.924893  101766 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:15:09.924920  101766 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:15:09.925163  101766 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:15:09.940278  101766 status.go:330] ha-828033-m02 host status = "Stopped" (err=<nil>)
	I0522 18:15:09.940292  101766 status.go:343] host is not running, skipping remaining checks
	I0522 18:15:09.940297  101766 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:549: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-828033 status -v=7 --alsologtostderr": ha-828033
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

ha-828033-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

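The assertion at ha_test.go:549 fails because the stop/status round-trip only sees two control-plane nodes, ha-828033 and ha-828033-m02; the third node was removed by the earlier (already failing) DeleteSecondaryNode test, so a check that expects three stopped kubelets cannot pass. A sketch of that kind of check, assuming it simply counts "kubelet: Stopped" stanzas in the status output (the real test may differ in detail):

package main

import (
	"fmt"
	"strings"
)

// stoppedKubelets counts "kubelet: Stopped" stanzas in a status dump.
func stoppedKubelets(statusOut string) int {
	return strings.Count(statusOut, "kubelet: Stopped")
}

func main() {
	// Only two nodes report in, as in the status output above.
	out := "ha-828033\nkubelet: Stopped\n\nha-828033-m02\nkubelet: Stopped\n"
	fmt.Println(stoppedKubelets(out) == 3) // false: two != three, so the test fails
}
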
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:11:41.545152719Z",
	            "FinishedAt": "2024-05-22T18:15:09.116884079Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30d0c98ea9130cbb800c462fe8803bee586edca8539288200e46ac88b3b024b2",
	            "SandboxKey": "/var/run/docker/netns/30d0c98ea913",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
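Rather than parsing the full JSON above, the harness reads single fields from the inspect output with Go templates (the docker container inspect ha-828033 --format={{.State.Status}} cli_runner lines that follow). A self-contained sketch of the same probe:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState returns one field of `docker container inspect` via a
// Go template, as the cli_runner calls in this report do.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	state, err := containerState("ha-828033")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println(state) // "exited" for the stopped container above
}
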
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 7 (56.811966ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "ha-828033" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/StopCluster (2.42s)

x
+
TestMultiControlPlane/serial/RestartCluster (225.35s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-828033 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker
E0522 18:16:55.310407   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:17:24.838840   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p ha-828033 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker: exit status 80 (3m43.952169627s)

-- stdout --
	* [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "ha-828033" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	* Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "ha-828033-m02" ...
	* Updating the running docker "ha-828033-m02" container ...
	
	

-- /stdout --
** stderr ** 
	I0522 18:15:10.052711  101831 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:10.052945  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.052953  101831 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:10.052957  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.053112  101831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:10.053580  101831 out.go:298] Setting JSON to false
	I0522 18:15:10.054415  101831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3454,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:15:10.054471  101831 start.go:139] virtualization: kvm guest
	I0522 18:15:10.056675  101831 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:15:10.058040  101831 notify.go:220] Checking for updates...
	I0522 18:15:10.058046  101831 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:15:10.059343  101831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:15:10.060677  101831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:10.061800  101831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:15:10.062877  101831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:15:10.064091  101831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:15:10.065687  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:10.066119  101831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:15:10.086670  101831 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:15:10.086771  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.130648  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.122350286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.130754  101831 docker.go:295] overlay module found
	I0522 18:15:10.132447  101831 out.go:177] * Using the docker driver based on existing profile
	I0522 18:15:10.133511  101831 start.go:297] selected driver: docker
	I0522 18:15:10.133528  101831 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.133615  101831 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:15:10.133693  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.178797  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.170730392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.179465  101831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:15:10.179495  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:10.179504  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:10.179557  101831 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
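The cluster config echoed above pins APIServerHAVIP to 192.168.49.254, the same virtual IP the kubelet repeatedly failed to reach earlier ("no route to host"). A quick reachability probe for that endpoint, as a sketch (address and port taken from the log; in a minikube HA cluster this VIP is expected to be answered by kube-vip, which is an assumption here, not something this log confirms):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// 192.168.49.254:8443 is the APIServerHAVIP from the config above.
	conn, err := net.DialTimeout("tcp", "192.168.49.254:8443", 2*time.Second)
	if err != nil {
		fmt.Println("VIP unreachable:", err) // matches "no route to host" in the kubelet log
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable")
}
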
	I0522 18:15:10.181838  101831 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:15:10.182862  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:15:10.184066  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:15:10.185142  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:10.185165  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:15:10.185172  101831 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:15:10.185187  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:15:10.185275  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:15:10.185286  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:15:10.185372  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.199839  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:15:10.199866  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:15:10.199888  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:15:10.199920  101831 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:15:10.199975  101831 start.go:364] duration metric: took 36.63µs to acquireMachinesLock for "ha-828033"
	I0522 18:15:10.199991  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:15:10.200001  101831 fix.go:54] fixHost starting: 
	I0522 18:15:10.200212  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.216528  101831 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:15:10.216569  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:15:10.218337  101831 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:15:10.219502  101831 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:15:10.489901  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.507723  101831 kic.go:430] container "ha-828033" state is running.
	I0522 18:15:10.508126  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:10.527137  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.527348  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:15:10.527408  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:10.544792  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:10.545081  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:10.545103  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:15:10.545690  101831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56208->127.0.0.1:32817: read: connection reset by peer
	I0522 18:15:13.662862  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.662903  101831 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:15:13.662964  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.679655  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.679834  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.679848  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:15:13.801105  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.801184  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.817648  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.817828  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.817845  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:15:13.931153  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:15:13.931179  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:15:13.931217  101831 ubuntu.go:177] setting up certificates
	I0522 18:15:13.931238  101831 provision.go:84] configureAuth start
	I0522 18:15:13.931311  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:13.947388  101831 provision.go:143] copyHostCerts
	I0522 18:15:13.947420  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947445  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:15:13.947460  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947524  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:15:13.947607  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947625  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:15:13.947628  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947654  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:15:13.947696  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947711  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:15:13.947717  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947737  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:15:13.947784  101831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 18:15:14.398357  101831 provision.go:177] copyRemoteCerts
	I0522 18:15:14.398411  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:15:14.398442  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.414166  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:14.499249  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:15:14.499326  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:15:14.520994  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:15:14.521050  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 18:15:14.540775  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:15:14.540816  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 18:15:14.560240  101831 provision.go:87] duration metric: took 628.988417ms to configureAuth
	I0522 18:15:14.560262  101831 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:15:14.560422  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:14.560469  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.576177  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.576336  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.576348  101831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:15:14.687318  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:15:14.687343  101831 ubuntu.go:71] root file system type: overlay
	I0522 18:15:14.687455  101831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:15:14.687517  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.704102  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.704323  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.704424  101831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:15:14.825449  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:15:14.825531  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.841507  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.841715  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.841741  101831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:15:14.955461  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:15:14.955484  101831 machine.go:97] duration metric: took 4.428121798s to provisionDockerMachine
	I0522 18:15:14.955497  101831 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:15:14.955511  101831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:15:14.955559  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:15:14.955599  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.970693  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.055854  101831 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:15:15.058722  101831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:15:15.058760  101831 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:15:15.058771  101831 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:15:15.058780  101831 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:15:15.058789  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:15:15.058832  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:15:15.058903  101831 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:15:15.058914  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:15:15.058993  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:15:15.066158  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:15.086000  101831 start.go:296] duration metric: took 130.491ms for postStartSetup
	I0522 18:15:15.086056  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:15.086093  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.101977  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.183666  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:15.187576  101831 fix.go:56] duration metric: took 4.987575013s for fixHost
	I0522 18:15:15.187597  101831 start.go:83] releasing machines lock for "ha-828033", held for 4.987611005s
	I0522 18:15:15.187662  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:15.203730  101831 ssh_runner.go:195] Run: cat /version.json
	I0522 18:15:15.203784  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.203832  101831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:15:15.203905  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.219620  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.220317  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.298438  101831 ssh_runner.go:195] Run: systemctl --version
	I0522 18:15:15.369455  101831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:15:15.373670  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:15:15.389963  101831 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:15:15.390037  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:15:15.397635  101831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:15:15.397661  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.397689  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.397785  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.411498  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:15:15.419815  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:15:15.428116  101831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.428162  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:15:15.436218  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.444432  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:15:15.452463  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.460889  101831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:15:15.468598  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:15:15.476986  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:15:15.485179  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:15:15.493301  101831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:15:15.500194  101831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:15:15.506903  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:15.578809  101831 ssh_runner.go:195] Run: sudo systemctl restart containerd
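The run of sed edits above aligns /etc/containerd/config.toml with the detected host driver: SystemdCgroup = false (cgroupfs), the runc runtime normalized to io.containerd.runc.v2, conf_dir restored to /etc/cni/net.d, and unprivileged ports enabled, all before the daemon-reload and restart. The SystemdCgroup rewrite expressed as an equivalent Go regexp, purely for illustration:

package main

import (
	"fmt"
	"regexp"
)

// Go equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
var systemdCgroup = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)

func forceCgroupfs(configTOML string) string {
	return systemdCgroup.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
	fmt.Print(forceCgroupfs(in))
	// prints the block with "  SystemdCgroup = false", indentation preserved
}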
	I0522 18:15:15.647535  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.647580  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.647625  101831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:15:15.659341  101831 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:15:15.659408  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:15:15.670447  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.687181  101831 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:15:15.690280  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:15:15.698889  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:15:15.716155  101831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:15:15.849757  101831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:15:15.927002  101831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.927199  101831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:15:15.958682  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.035955  101831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:15:16.309267  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:15:16.319069  101831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:15:16.329406  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.338954  101831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:15:16.411316  101831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:15:16.482185  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.558123  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:15:16.569903  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.579592  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.654464  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:15:16.713660  101831 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:15:16.713739  101831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:15:16.717169  101831 start.go:562] Will wait 60s for crictl version
	I0522 18:15:16.717224  101831 ssh_runner.go:195] Run: which crictl
	I0522 18:15:16.720182  101831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:15:16.750802  101831 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:15:16.750855  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.772501  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.795663  101831 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:15:16.795751  101831 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:15:16.811580  101831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:15:16.814850  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:15:16.824839  101831 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:15:16.824958  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:16.825025  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.842616  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.842633  101831 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:15:16.842688  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.859091  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.859115  101831 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:15:16.859131  101831 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:15:16.859251  101831 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:15:16.859326  101831 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:15:16.902852  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:16.902868  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:16.902882  101831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:15:16.902904  101831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:15:16.903073  101831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
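The stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered from the options struct logged at kubeadm.go:181. A toy version of that templating, assuming a trimmed parameter struct (not minikube's actual types):

package main

import (
	"os"
	"text/template"
)

// Trimmed stand-in for minikube's kubeadm parameters.
type params struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8443,
		NodeName:         "ha-828033",
	})
}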
	
	I0522 18:15:16.903091  101831 kube-vip.go:115] generating kube-vip config ...
	I0522 18:15:16.903133  101831 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:15:16.913846  101831 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
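kube-vip can load-balance control-plane traffic over IPVS, but only when the ip_vs kernel modules are visible; the probe above exits 1 inside the kic container, so minikube drops load-balancing and the manifest below runs kube-vip in plain ARP failover mode (vip_arp=true) for the VIP 192.168.49.254. A sketch of that probe-and-fallback decision:

package main

import (
	"fmt"
	"os/exec"
)

// hasIPVS mirrors the probe above: grep exits non-zero when no ip_vs
// module line matches, which exec surfaces as an error.
func hasIPVS() bool {
	return exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run() == nil
}

func main() {
	if hasIPVS() {
		fmt.Println("enable kube-vip IPVS control-plane load-balancing")
	} else {
		fmt.Println("give up load-balancing; ARP-advertised VIP only")
	}
}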
	I0522 18:15:16.913951  101831 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0522 18:15:16.914004  101831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:15:16.921502  101831 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:15:16.921564  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:15:16.928993  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:15:16.944153  101831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:15:16.959523  101831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:15:16.974202  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:15:16.988963  101831 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:15:16.991795  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:15:17.000800  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:17.079221  101831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:15:17.090798  101831 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:15:17.090820  101831 certs.go:194] generating shared ca certs ...
	I0522 18:15:17.090844  101831 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.090965  101831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:15:17.091002  101831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:15:17.091008  101831 certs.go:256] generating profile certs ...
	I0522 18:15:17.091078  101831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:15:17.091129  101831 certs.go:616] failed to parse cert file /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: x509: cannot parse IP address of length 0
	I0522 18:15:17.091199  101831 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:15:17.091213  101831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:15:17.140524  101831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:15:17.140548  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140659  101831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:15:17.140670  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140730  101831 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:15:17.140925  101831 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
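Note the <nil> in the SAN list logged at crypto.go:68 above: an unresolved node IP made it into the apiserver certificate's IPAddresses, and that is the root cause of this run's failures. Go's crypto/x509 encodes a nil entry as a zero-length iPAddress SAN, which it then refuses to parse back, producing exactly the "x509: cannot parse IP address of length 0" error that every kubectl apply below reports. A self-contained reproduction (assuming, as appears to hold for current Go toolchains, that CreateCertificate does not length-check IP SANs):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		// net.ParseIP("") is nil; a nil entry is encoded as a
		// zero-length iPAddress SAN.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println("create:", err)
		return
	}
	_, err = x509.ParseCertificate(der)
	fmt.Println("parse:", err)
	// parse: x509: cannot parse IP address of length 0
}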
	I0522 18:15:17.141101  101831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:15:17.141119  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:15:17.141133  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:15:17.141147  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:15:17.141170  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:15:17.141187  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:15:17.141204  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:15:17.141219  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:15:17.141242  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:15:17.141303  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:15:17.141346  101831 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:15:17.141359  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:15:17.141388  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:15:17.141417  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:15:17.141446  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:15:17.141496  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:17.141532  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.141552  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.141573  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.142334  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:15:17.168748  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:15:17.251949  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:15:17.279089  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:15:17.360292  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:15:17.382285  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:15:17.402361  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:15:17.422080  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:15:17.441696  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:15:17.461724  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:15:17.481252  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:15:17.500617  101831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:15:17.515028  101831 ssh_runner.go:195] Run: openssl version
	I0522 18:15:17.519598  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:15:17.527181  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530162  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530202  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.535963  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:15:17.543306  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:15:17.551068  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553913  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553960  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.559966  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:15:17.567478  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:15:17.575235  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578146  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578200  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.584135  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:15:17.591800  101831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:15:17.594551  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:15:17.600342  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:15:17.606283  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:15:17.611975  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:15:17.617679  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:15:17.623211  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
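Each -checkend 86400 call above asks openssl whether the certificate expires within the next 24 hours; minikube regenerates only the ones that fail. The same check in Go, as a hypothetical helper (not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin matches `openssl x509 -noout -checkend <seconds>`:
// true means the cert expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}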
	I0522 18:15:17.628747  101831 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:17.628861  101831 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:15:17.645553  101831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:15:17.653137  101831 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:15:17.653154  101831 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:15:17.653158  101831 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:15:17.653194  101831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:15:17.660437  101831 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:15:17.660808  101831 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.660901  101831 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:15:17.661141  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.661490  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.661685  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.662092  101831 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:15:17.662244  101831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:15:17.669585  101831 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:15:17.669601  101831 kubeadm.go:591] duration metric: took 16.438601ms to restartPrimaryControlPlane
	I0522 18:15:17.669608  101831 kubeadm.go:393] duration metric: took 40.865584ms to StartCluster
	I0522 18:15:17.669620  101831 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.669675  101831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.670178  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.670340  101831 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:15:17.670358  101831 start.go:240] waiting for startup goroutines ...
	I0522 18:15:17.670369  101831 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:15:17.670406  101831 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:15:17.670424  101831 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:15:17.670437  101831 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	W0522 18:15:17.670444  101831 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:15:17.670452  101831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 18:15:17.670468  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.670519  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:17.670698  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.670784  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.689774  101831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:15:17.689555  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.691107  101831 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.691126  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:15:17.691169  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.691305  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.691526  101831 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:15:17.691538  101831 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:15:17.691559  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.691847  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.710078  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.710513  101831 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:17.710529  101831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:15:17.710565  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.726905  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.803514  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.818704  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:17.855350  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.855404  101831 retry.go:31] will retry after 232.813174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:17.869892  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.869918  101831 retry.go:31] will retry after 317.212878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
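addons.go applies each manifest with the bundled kubectl and hands failures to a retry helper; the "will retry after ..." lines are its jittered backoff. The retries are futile here, since every attempt dies on the same certificate parse error before reaching the apiserver. A sketch of such a retry wrapper (names illustrative, not minikube's retry API):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn up to attempts times, sleeping a jittered,
// growing delay between failures, the shape of the retry.go lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retryWithBackoff(4, 200*time.Millisecond, func() error {
		return fmt.Errorf("connect: connection refused") // stands in for the kubectl apply failure
	})
	fmt.Println("gave up:", err)
}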
	I0522 18:15:18.089255  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.139447  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.139480  101831 retry.go:31] will retry after 388.464948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.187648  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:18.237073  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.237097  101831 retry.go:31] will retry after 286.046895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.523727  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:18.528673  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.578085  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.578120  101831 retry.go:31] will retry after 730.017926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:18.580563  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.580590  101831 retry.go:31] will retry after 575.328536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.156346  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:19.207853  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.207882  101831 retry.go:31] will retry after 904.065015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.309074  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:19.360363  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.360398  101831 retry.go:31] will retry after 668.946527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.030373  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:20.081266  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.081297  101831 retry.go:31] will retry after 1.581516451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.112442  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:20.162392  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.162423  101831 retry.go:31] will retry after 799.963515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.962767  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:21.014221  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.014258  101831 retry.go:31] will retry after 2.627281568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.663009  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:21.716311  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.716340  101831 retry.go:31] will retry after 973.454643ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.690502  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:22.742767  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.742794  101831 retry.go:31] will retry after 3.340789148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.641773  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:23.775204  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.775240  101831 retry.go:31] will retry after 2.671895107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.083777  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:26.134578  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.134608  101831 retry.go:31] will retry after 4.298864045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.448092  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:26.499632  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.499662  101831 retry.go:31] will retry after 5.525229223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.434210  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:30.485401  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.485428  101831 retry.go:31] will retry after 4.916959612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.025957  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:32.076991  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.077021  101831 retry.go:31] will retry after 7.245842793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.402632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:35.454254  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.454282  101831 retry.go:31] will retry after 10.414070295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.324207  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:39.375910  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.375942  101831 retry.go:31] will retry after 9.156494241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.868576  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:45.920031  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.920063  101831 retry.go:31] will retry after 14.404576525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.532789  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:48.585261  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.585294  101831 retry.go:31] will retry after 17.974490677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.325688  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:00.377854  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.377897  101831 retry.go:31] will retry after 11.577079387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.561241  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:06.612860  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.612894  101831 retry.go:31] will retry after 14.583164714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:11.956632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:12.008606  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:12.008639  101831 retry.go:31] will retry after 46.302827634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.196878  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:21.247130  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.247161  101831 retry.go:31] will retry after 25.952174169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:47.199672  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:47.251576  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:47.251667  101831 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.312157  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:58.364469  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:58.364578  101831 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.366416  101831 out.go:177] * Enabled addons: 
	I0522 18:16:58.367516  101831 addons.go:505] duration metric: took 1m40.697149813s for enable addons: enabled=[]
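[Editor's note] Both addons fail for the same underlying reason: `kubectl apply` validates manifests against the apiserver's OpenAPI document, and the apiserver on localhost:8443 is refusing connections, so no amount of retrying can succeed until the apiserver is reachable (and `--validate=false` would only skip validation before failing on the apply itself). A hedged sketch of the kind of health probe that would gate these applies; `waitForAPIServer` is illustrative, not a minikube function:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForAPIServer polls the apiserver's /healthz endpoint until it
    // answers 200 OK, so "kubectl apply" is not attempted against a dead
    // endpoint.
    func waitForAPIServer(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// Probe only: skip cert verification for the health check.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if resp, err := client.Get(url); err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitForAPIServer("https://localhost:8443/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }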
	I0522 18:16:58.367546  101831 start.go:245] waiting for cluster config update ...
	I0522 18:16:58.367558  101831 start.go:254] writing updated cluster config ...
	I0522 18:16:58.369066  101831 out.go:177] 
	I0522 18:16:58.370289  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:16:58.370344  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.371848  101831 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:16:58.373273  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:16:58.374502  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:16:58.375701  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:16:58.375722  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:16:58.375727  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:16:58.375816  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:16:58.375840  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:16:58.375916  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.392272  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:16:58.392290  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:16:58.392305  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:16:58.392330  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:16:58.392384  101831 start.go:364] duration metric: took 37.403µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:16:58.392400  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:16:58.392405  101831 fix.go:54] fixHost starting: m02
	I0522 18:16:58.392601  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.408748  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:16:58.408768  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:16:58.410677  101831 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:16:58.411822  101831 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:16:58.662201  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.678298  101831 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:16:58.678749  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:16:58.695431  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:16:58.695483  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:16:58.710353  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:16:58.711129  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.711158  101831 retry.go:31] will retry after 162.419442ms: ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	W0522 18:16:58.874922  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.874949  101831 retry.go:31] will retry after 374.487623ms: ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
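[Editor's note] The two handshake failures right after `docker start` are expected noise: the forwarded port accepts TCP before sshd inside the container is listening, so the first connections are reset. A small sketch of dialing with retry, using golang.org/x/crypto/ssh (the helper name is assumed; minikube's actual sshutil differs):

    package main

    import (
    	"fmt"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialSSHWithRetry retries the SSH handshake a few times, since a freshly
    // restarted container may reset connections until sshd is up.
    func dialSSHWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
    	var lastErr error
    	for i := 0; i < attempts; i++ {
    		client, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return client, nil
    		}
    		lastErr = err
    		time.Sleep(time.Duration(i+1) * 200 * time.Millisecond)
    	}
    	return nil, fmt.Errorf("ssh dial %s after %d attempts: %w", addr, attempts, lastErr)
    }

    func main() {
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
    	}
    	if _, err := dialSSHWithRetry("127.0.0.1:32822", cfg, 3); err != nil {
    		fmt.Println(err)
    	}
    }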
	I0522 18:16:59.335651  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:16:59.339485  101831 fix.go:56] duration metric: took 947.0745ms for fixHost
	I0522 18:16:59.339510  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 947.115875ms
	W0522 18:16:59.339525  101831 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:16:59.339587  101831 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:16:59.339604  101831 start.go:728] Will try again in 5 seconds ...
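[Editor's note] The fatal error here, `container addresses should have 2 values, got 1 values: []`, comes from parsing the `docker container inspect` template shown earlier: a healthy container prints `<IPv4>,<IPv6>`, but after the restart the container has no entry for its network yet, so the template prints nothing and splitting the empty string yields a single empty field instead of two. An illustrative reconstruction of that check (the function name is assumed):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseContainerAddrs expects the inspect template output "<ipv4>,<ipv6>".
    // An empty template result (container not attached to the network) splits
    // into one empty field, reproducing the error seen in the log.
    func parseContainerAddrs(out string) (ipv4, ipv6 string, err error) {
    	fields := strings.Split(strings.TrimSpace(out), ",")
    	if len(fields) != 2 {
    		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(fields), fields)
    	}
    	return fields[0], fields[1], nil
    }

    func main() {
    	if _, _, err := parseContainerAddrs(""); err != nil {
    		// Prints: container addresses should have 2 values, got 1 values: []
    		fmt.Println(err)
    	}
    }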
	I0522 18:17:04.343396  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:17:04.343479  101831 start.go:364] duration metric: took 52.078µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:17:04.343499  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:17:04.343506  101831 fix.go:54] fixHost starting: m02
	I0522 18:17:04.343719  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:17:04.359537  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:17:04.359560  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:17:04.361525  101831 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:17:04.362763  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:17:04.362823  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.378286  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.378448  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.378458  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:17:04.490382  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.490408  101831 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:17:04.490471  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.506007  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.506177  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.506191  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:17:04.628978  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.629058  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.645189  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.645348  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.645364  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:17:04.759139  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:17:04.759186  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:17:04.759214  101831 ubuntu.go:177] setting up certificates
	I0522 18:17:04.759235  101831 provision.go:84] configureAuth start
	I0522 18:17:04.759332  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.775834  101831 provision.go:87] duration metric: took 16.584677ms to configureAuth
	W0522 18:17:04.775854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.775873  101831 retry.go:31] will retry after 126.959µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.777009  101831 provision.go:84] configureAuth start
	I0522 18:17:04.777074  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.793126  101831 provision.go:87] duration metric: took 16.098282ms to configureAuth
	W0522 18:17:04.793147  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.793164  101831 retry.go:31] will retry after 87.815µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.794272  101831 provision.go:84] configureAuth start
	I0522 18:17:04.794339  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.810002  101831 provision.go:87] duration metric: took 15.712157ms to configureAuth
	W0522 18:17:04.810023  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.810043  101831 retry.go:31] will retry after 160.401µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.811149  101831 provision.go:84] configureAuth start
	I0522 18:17:04.811208  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.826479  101831 provision.go:87] duration metric: took 15.314201ms to configureAuth
	W0522 18:17:04.826498  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.826513  101831 retry.go:31] will retry after 419.179µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.827621  101831 provision.go:84] configureAuth start
	I0522 18:17:04.827687  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.842837  101831 provision.go:87] duration metric: took 15.198634ms to configureAuth
	W0522 18:17:04.842854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.842870  101831 retry.go:31] will retry after 333.49µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.843983  101831 provision.go:84] configureAuth start
	I0522 18:17:04.844056  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.858999  101831 provision.go:87] duration metric: took 15.001015ms to configureAuth
	W0522 18:17:04.859014  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.859029  101831 retry.go:31] will retry after 831.427µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.860145  101831 provision.go:84] configureAuth start
	I0522 18:17:04.860207  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.874679  101831 provision.go:87] duration metric: took 14.517169ms to configureAuth
	W0522 18:17:04.874696  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.874710  101831 retry.go:31] will retry after 1.617455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.876883  101831 provision.go:84] configureAuth start
	I0522 18:17:04.876932  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.891845  101831 provision.go:87] duration metric: took 14.947571ms to configureAuth
	W0522 18:17:04.891860  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.891873  101831 retry.go:31] will retry after 1.45074ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.894054  101831 provision.go:84] configureAuth start
	I0522 18:17:04.894110  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.909207  101831 provision.go:87] duration metric: took 15.132147ms to configureAuth
	W0522 18:17:04.909224  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.909239  101831 retry.go:31] will retry after 2.781453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.912374  101831 provision.go:84] configureAuth start
	I0522 18:17:04.912425  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.927102  101831 provision.go:87] duration metric: took 14.710332ms to configureAuth
	W0522 18:17:04.927120  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.927135  101831 retry.go:31] will retry after 3.086595ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.930243  101831 provision.go:84] configureAuth start
	I0522 18:17:04.930304  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.944990  101831 provision.go:87] duration metric: took 14.727208ms to configureAuth
	W0522 18:17:04.945005  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.945020  101831 retry.go:31] will retry after 8.052612ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.953127  101831 provision.go:84] configureAuth start
	I0522 18:17:04.953199  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.968194  101831 provision.go:87] duration metric: took 15.047376ms to configureAuth
	W0522 18:17:04.968211  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.968235  101831 retry.go:31] will retry after 12.227939ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.981403  101831 provision.go:84] configureAuth start
	I0522 18:17:04.981475  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.996918  101831 provision.go:87] duration metric: took 15.4993ms to configureAuth
	W0522 18:17:04.996933  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.996947  101831 retry.go:31] will retry after 9.372006ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.007135  101831 provision.go:84] configureAuth start
	I0522 18:17:05.007251  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.021722  101831 provision.go:87] duration metric: took 14.570245ms to configureAuth
	W0522 18:17:05.021738  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.021751  101831 retry.go:31] will retry after 23.298276ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.045949  101831 provision.go:84] configureAuth start
	I0522 18:17:05.046030  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.062577  101831 provision.go:87] duration metric: took 16.607282ms to configureAuth
	W0522 18:17:05.062597  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.062613  101831 retry.go:31] will retry after 40.757138ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.103799  101831 provision.go:84] configureAuth start
	I0522 18:17:05.103887  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.119482  101831 provision.go:87] duration metric: took 15.655062ms to configureAuth
	W0522 18:17:05.119499  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.119516  101831 retry.go:31] will retry after 38.095973ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.158702  101831 provision.go:84] configureAuth start
	I0522 18:17:05.158788  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.174198  101831 provision.go:87] duration metric: took 15.463621ms to configureAuth
	W0522 18:17:05.174214  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.174232  101831 retry.go:31] will retry after 48.82201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.223426  101831 provision.go:84] configureAuth start
	I0522 18:17:05.223513  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.239564  101831 provision.go:87] duration metric: took 16.11307ms to configureAuth
	W0522 18:17:05.239581  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.239597  101831 retry.go:31] will retry after 136.469602ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.376897  101831 provision.go:84] configureAuth start
	I0522 18:17:05.377009  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.393537  101831 provision.go:87] duration metric: took 16.613386ms to configureAuth
	W0522 18:17:05.393558  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.393575  101831 retry.go:31] will retry after 161.82385ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.555925  101831 provision.go:84] configureAuth start
	I0522 18:17:05.556033  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.572787  101831 provision.go:87] duration metric: took 16.830217ms to configureAuth
	W0522 18:17:05.572804  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.572824  101831 retry.go:31] will retry after 213.087725ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.786136  101831 provision.go:84] configureAuth start
	I0522 18:17:05.786249  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.801903  101831 provision.go:87] duration metric: took 15.735371ms to configureAuth
	W0522 18:17:05.801919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.801935  101831 retry.go:31] will retry after 367.249953ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.169404  101831 provision.go:84] configureAuth start
	I0522 18:17:06.169504  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.186269  101831 provision.go:87] duration metric: took 16.837758ms to configureAuth
	W0522 18:17:06.186288  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.186306  101831 retry.go:31] will retry after 668.860958ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.856116  101831 provision.go:84] configureAuth start
	I0522 18:17:06.856211  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.872110  101831 provision.go:87] duration metric: took 15.968481ms to configureAuth
	W0522 18:17:06.872130  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.872145  101831 retry.go:31] will retry after 1.080057807s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.952333  101831 provision.go:84] configureAuth start
	I0522 18:17:07.952446  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:07.969099  101831 provision.go:87] duration metric: took 16.737681ms to configureAuth
	W0522 18:17:07.969119  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.969136  101831 retry.go:31] will retry after 1.35549681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.325582  101831 provision.go:84] configureAuth start
	I0522 18:17:09.325692  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:09.341763  101831 provision.go:87] duration metric: took 16.155925ms to configureAuth
	W0522 18:17:09.341780  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.341798  101831 retry.go:31] will retry after 1.897886244s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.240016  101831 provision.go:84] configureAuth start
	I0522 18:17:11.240140  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:11.257072  101831 provision.go:87] duration metric: took 17.02632ms to configureAuth
	W0522 18:17:11.257092  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.257114  101831 retry.go:31] will retry after 2.810888271s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.070011  101831 provision.go:84] configureAuth start
	I0522 18:17:14.070113  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:14.085901  101831 provision.go:87] duration metric: took 15.848159ms to configureAuth
	W0522 18:17:14.085919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.085935  101831 retry.go:31] will retry after 4.662344732s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.748720  101831 provision.go:84] configureAuth start
	I0522 18:17:18.748845  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:18.765467  101831 provision.go:87] duration metric: took 16.701835ms to configureAuth
	W0522 18:17:18.765486  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.765504  101831 retry.go:31] will retry after 3.216983163s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:21.983872  101831 provision.go:84] configureAuth start
	I0522 18:17:21.983984  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:22.000235  101831 provision.go:87] duration metric: took 16.33158ms to configureAuth
	W0522 18:17:22.000253  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:22.000269  101831 retry.go:31] will retry after 5.251668241s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.253805  101831 provision.go:84] configureAuth start
	I0522 18:17:27.253896  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:27.270555  101831 provision.go:87] duration metric: took 16.716068ms to configureAuth
	W0522 18:17:27.270575  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.270593  101831 retry.go:31] will retry after 7.113433713s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.388102  101831 provision.go:84] configureAuth start
	I0522 18:17:34.388187  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:34.404845  101831 provision.go:87] duration metric: took 16.712516ms to configureAuth
	W0522 18:17:34.404862  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.404878  101831 retry.go:31] will retry after 14.943192814s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.349248  101831 provision.go:84] configureAuth start
	I0522 18:17:49.349327  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:49.365985  101831 provision.go:87] duration metric: took 16.710371ms to configureAuth
	W0522 18:17:49.366002  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.366018  101831 retry.go:31] will retry after 20.509395565s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.875559  101831 provision.go:84] configureAuth start
	I0522 18:18:09.875637  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:09.892771  101831 provision.go:87] duration metric: took 17.18443ms to configureAuth
	W0522 18:18:09.892792  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.892808  101831 retry.go:31] will retry after 43.941504091s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.837442  101831 provision.go:84] configureAuth start
	I0522 18:18:53.837525  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:53.854156  101831 provision.go:87] duration metric: took 16.677406ms to configureAuth
	W0522 18:18:53.854181  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854199  101831 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854204  101831 machine.go:97] duration metric: took 1m49.491432011s to provisionDockerMachine
	I0522 18:18:53.854270  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:18:53.854308  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:18:53.869467  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:18:53.955836  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:18:53.959906  101831 fix.go:56] duration metric: took 1m49.616394756s for fixHost
	I0522 18:18:53.959927  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m49.61643748s
	W0522 18:18:53.960003  101831 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.962122  101831 out.go:177] 
	W0522 18:18:53.963599  101831 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:18:53.963614  101831 out.go:239] * 
	W0522 18:18:53.964392  101831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:18:53.965343  101831 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:562: failed to start cluster. args "out/minikube-linux-amd64 start -p ha-828033 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=docker" : exit status 80
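A note on the failure mode above: the retry loop keeps re-running the same `docker container inspect -f` template, which is expected to print the container's IPv4 and IPv6 addresses as a comma-separated pair. When the container has no entry under the requested network key, the {{with}} block emits nothing, so splitting the empty output on "," yields one empty field instead of two, which is exactly the "should have 2 values, got 1 values: []" error repeated above. A minimal Go sketch (not minikube's actual code; the map literal below is a stand-in for real inspect output) reproduces the arithmetic:

package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

func main() {
	// Stand-in for .NetworkSettings.Networks from `docker inspect`; the
	// "ha-828033-m02" key is deliberately absent to mirror the failure above.
	data := map[string]any{
		"NetworkSettings": map[string]any{
			"Networks": map[string]any{},
		},
	}
	const f = `{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
	var out bytes.Buffer
	if err := template.Must(template.New("ip").Parse(f)).Execute(&out, data); err != nil {
		panic(err)
	}
	addrs := strings.Split(out.String(), ",")
	// Prints: got 1 value(s): [""] -- the condition the backoff loop above
	// keeps hitting, so no amount of retrying can succeed until the container
	// is reattached to the expected network.
	fmt.Printf("got %d value(s): %q\n", len(addrs), addrs)
}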
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:15:10.483132321Z",
	            "FinishedAt": "2024-05-22T18:15:09.116884079Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e612b115b826e3419d82d7b81443bb337ae8736fcd5da15e19129972417863e7",
	            "SandboxKey": "/var/run/docker/netns/e612b115b826",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "e2ea99d68522c5a32290bcf1c36c6f217acb3d5d61a816c7582d4e1903563b0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
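For contrast with the failing m02 node, the inspect output above shows the primary container still attached to the "ha-828033" network with IPAddress 192.168.49.2, so the same two-value lookup would succeed there. A small sketch (the struct models only the fields quoted above; nothing here is minikube's own code) of how that attachment check reads from the JSON:

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed shape of the `docker inspect` document above; only the fields
// needed to check network attachment are modeled.
type container struct {
	NetworkSettings struct {
		Networks map[string]struct {
			IPAddress string `json:"IPAddress"`
		} `json:"Networks"`
	} `json:"NetworkSettings"`
}

func main() {
	// Excerpt matching the primary node's inspect output above.
	raw := `{"NetworkSettings":{"Networks":{"ha-828033":{"IPAddress":"192.168.49.2"}}}}`
	var c container
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	if nw, ok := c.NetworkSettings.Networks["ha-828033"]; ok {
		fmt.Println("attached, IP:", nw.IPAddress) // attached, IP: 192.168.49.2
	} else {
		fmt.Println("not attached") // the state the m02 provisioning loop kept hitting
	}
}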
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 2 (238.724125ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033 -v=7                                                           | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-828033 -v=7                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC | 22 May 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	| node    | ha-828033 node delete m03 -v=7                                                   | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-828033 stop -v=7                                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC | 22 May 24 18:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true                                                         | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=docker                                                       |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:15:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:15:10.052711  101831 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:10.052945  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.052953  101831 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:10.052957  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.053112  101831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:10.053580  101831 out.go:298] Setting JSON to false
	I0522 18:15:10.054415  101831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3454,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:15:10.054471  101831 start.go:139] virtualization: kvm guest
	I0522 18:15:10.056675  101831 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:15:10.058040  101831 notify.go:220] Checking for updates...
	I0522 18:15:10.058046  101831 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:15:10.059343  101831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:15:10.060677  101831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:10.061800  101831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:15:10.062877  101831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:15:10.064091  101831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:15:10.065687  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:10.066119  101831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:15:10.086670  101831 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:15:10.086771  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.130648  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.122350286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.130754  101831 docker.go:295] overlay module found
	I0522 18:15:10.132447  101831 out.go:177] * Using the docker driver based on existing profile
	I0522 18:15:10.133511  101831 start.go:297] selected driver: docker
	I0522 18:15:10.133528  101831 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.133615  101831 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:15:10.133693  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.178797  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.170730392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.179465  101831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:15:10.179495  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:10.179504  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:10.179557  101831 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.181838  101831 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:15:10.182862  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:15:10.184066  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:15:10.185142  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:10.185165  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:15:10.185172  101831 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:15:10.185187  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:15:10.185275  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:15:10.185286  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:15:10.185372  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.199839  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:15:10.199866  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:15:10.199888  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:15:10.199920  101831 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:15:10.199975  101831 start.go:364] duration metric: took 36.63µs to acquireMachinesLock for "ha-828033"
	I0522 18:15:10.199991  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:15:10.200001  101831 fix.go:54] fixHost starting: 
	I0522 18:15:10.200212  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.216528  101831 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:15:10.216569  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:15:10.218337  101831 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:15:10.219502  101831 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:15:10.489901  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.507723  101831 kic.go:430] container "ha-828033" state is running.
	I0522 18:15:10.508126  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:10.527137  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.527348  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:15:10.527408  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:10.544792  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:10.545081  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:10.545103  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:15:10.545690  101831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56208->127.0.0.1:32817: read: connection reset by peer
	I0522 18:15:13.662862  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.662903  101831 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:15:13.662964  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.679655  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.679834  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.679848  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:15:13.801105  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.801184  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.817648  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.817828  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.817845  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:15:13.931153  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:15:13.931179  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:15:13.931217  101831 ubuntu.go:177] setting up certificates
	I0522 18:15:13.931238  101831 provision.go:84] configureAuth start
	I0522 18:15:13.931311  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:13.947388  101831 provision.go:143] copyHostCerts
	I0522 18:15:13.947420  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947445  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:15:13.947460  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947524  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:15:13.947607  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947625  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:15:13.947628  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947654  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:15:13.947696  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947711  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:15:13.947717  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947737  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:15:13.947784  101831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 18:15:14.398357  101831 provision.go:177] copyRemoteCerts
	I0522 18:15:14.398411  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:15:14.398442  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.414166  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:14.499249  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:15:14.499326  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:15:14.520994  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:15:14.521050  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 18:15:14.540775  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:15:14.540816  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 18:15:14.560240  101831 provision.go:87] duration metric: took 628.988417ms to configureAuth
	I0522 18:15:14.560262  101831 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:15:14.560422  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:14.560469  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.576177  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.576336  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.576348  101831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:15:14.687318  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:15:14.687343  101831 ubuntu.go:71] root file system type: overlay
	I0522 18:15:14.687455  101831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:15:14.687517  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.704102  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.704323  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.704424  101831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:15:14.825449  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:15:14.825531  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.841507  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.841715  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.841741  101831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:15:14.955461  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
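
	Note how the unit file is installed: the new content is written to docker.service.new, diffed against the live file, and only moved into place (followed by daemon-reload / enable / restart) when the two differ, so an unchanged config never restarts Docker. A sketch of that update-if-changed pattern (paths as in the log; assumes root and systemd, and is not minikube's code, which drives the shell command above):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// installIfChanged swaps newPath over curPath and restarts the unit only
	// when the two files differ, mirroring the diff-or-replace shell above.
	func installIfChanged(curPath, newPath, unit string) error {
		cur, _ := os.ReadFile(curPath) // a missing file reads as empty: force install
		next, err := os.ReadFile(newPath)
		if err != nil {
			return err
		}
		if bytes.Equal(cur, next) {
			return os.Remove(newPath) // identical: leave the running unit untouched
		}
		if err := os.Rename(newPath, curPath); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", unit},
			{"systemctl", "restart", unit},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %s", err, out)
			}
		}
		return nil
	}

	func main() {
		if err := installIfChanged("/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new", "docker"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
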
	I0522 18:15:14.955484  101831 machine.go:97] duration metric: took 4.428121798s to provisionDockerMachine
	I0522 18:15:14.955497  101831 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:15:14.955511  101831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:15:14.955559  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:15:14.955599  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.970693  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.055854  101831 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:15:15.058722  101831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:15:15.058760  101831 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:15:15.058771  101831 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:15:15.058780  101831 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:15:15.058789  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:15:15.058832  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:15:15.058903  101831 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:15:15.058914  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:15:15.058993  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:15:15.066158  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:15.086000  101831 start.go:296] duration metric: took 130.491ms for postStartSetup
	I0522 18:15:15.086056  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:15.086093  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.101977  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.183666  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:15.187576  101831 fix.go:56] duration metric: took 4.987575013s for fixHost
	I0522 18:15:15.187597  101831 start.go:83] releasing machines lock for "ha-828033", held for 4.987611005s
	I0522 18:15:15.187662  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:15.203730  101831 ssh_runner.go:195] Run: cat /version.json
	I0522 18:15:15.203784  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.203832  101831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:15:15.203905  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.219620  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.220317  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.298438  101831 ssh_runner.go:195] Run: systemctl --version
	I0522 18:15:15.369455  101831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:15:15.373670  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:15:15.389963  101831 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:15:15.390037  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:15:15.397635  101831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:15:15.397661  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.397689  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.397785  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.411498  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:15:15.419815  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:15:15.428116  101831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.428162  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:15:15.436218  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.444432  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:15:15.452463  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.460889  101831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:15:15.468598  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:15:15.476986  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:15:15.485179  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:15:15.493301  101831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:15:15.500194  101831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:15:15.506903  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:15.578809  101831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:15:15.647535  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.647580  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.647625  101831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:15:15.659341  101831 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:15:15.659408  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:15:15.670447  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.687181  101831 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:15:15.690280  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:15:15.698889  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:15:15.716155  101831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:15:15.849757  101831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:15:15.927002  101831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.927199  101831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:15:15.958682  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.035955  101831 ssh_runner.go:195] Run: sudo systemctl restart docker
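
	The 130-byte /etc/docker/daemon.json scp'd just above carries the cgroup-driver choice to dockerd; its content is not shown in the log. As an assumption for illustration only, a minimal file expressing that choice would use dockerd's documented exec-opts key, generated like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed minimal shape; the real file minikube writes is not shown
		// in the log and may carry additional settings.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		out, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			panic(err)
		}
		fmt.Println(string(out))
		// The file lands at /etc/docker/daemon.json, then the
		// daemon-reload and docker restart logged above make dockerd
		// pick up the cgroupfs driver.
	}
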
	I0522 18:15:16.309267  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:15:16.319069  101831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:15:16.329406  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.338954  101831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:15:16.411316  101831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:15:16.482185  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.558123  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:15:16.569903  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.579592  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.654464  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:15:16.713660  101831 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:15:16.713739  101831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:15:16.717169  101831 start.go:562] Will wait 60s for crictl version
	I0522 18:15:16.717224  101831 ssh_runner.go:195] Run: which crictl
	I0522 18:15:16.720182  101831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:15:16.750802  101831 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:15:16.750855  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.772501  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.795663  101831 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:15:16.795751  101831 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:15:16.811580  101831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:15:16.814850  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:15:16.824839  101831 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:15:16.824958  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:16.825025  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.842616  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.842633  101831 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:15:16.842688  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.859091  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.859115  101831 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:15:16.859131  101831 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:15:16.859251  101831 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:15:16.859326  101831 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:15:16.902852  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:16.902868  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:16.902882  101831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:15:16.902904  101831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:15:16.903073  101831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
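	Per-cluster values in a config like the one above (node IP, node name, CRI socket, ports) are the kind of thing typically substituted into a Go text/template before the file is shipped to /var/tmp/minikube/kubeadm.yaml.new. A reduced rendering sketch (the template and its fields are illustrative, not minikube's actual template):

	package main

	import (
		"os"
		"text/template"
	)

	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	`

	func main() {
		data := struct {
			NodeIP, CRISocket, NodeName string
			APIServerPort               int
		}{
			NodeIP:        "192.168.49.2",
			CRISocket:     "unix:///var/run/cri-dockerd.sock",
			NodeName:      "ha-828033",
			APIServerPort: 8443,
		}
		tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		// Renders the YAML with the cluster's values filled in, as in the
		// config dump above.
		if err := tmpl.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
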
	I0522 18:15:16.903091  101831 kube-vip.go:115] generating kube-vip config ...
	I0522 18:15:16.903133  101831 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:15:16.913846  101831 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
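
	kube-vip's control-plane load balancing needs the ip_vs kernel modules, and the probe that just failed is literally `lsmod | grep ip_vs`. The same check can be done without shelling out by scanning /proc/modules, which is what lsmod itself reads; a small sketch (not minikube's code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// hasModule reports whether any loaded kernel module name starts with
	// prefix (ip_vs matches ip_vs, ip_vs_rr, ip_vs_wrr, ...).
	func hasModule(prefix string) (bool, error) {
		f, err := os.Open("/proc/modules")
		if err != nil {
			return false, err
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			fields := strings.Fields(sc.Text())
			if len(fields) > 0 && strings.HasPrefix(fields[0], prefix) {
				return true, nil
			}
		}
		return false, sc.Err()
	}

	func main() {
		ok, err := hasModule("ip_vs")
		if err != nil {
			panic(err)
		}
		// false here means falling back to a config without IPVS
		// load-balancing, which is what the log above records.
		fmt.Println("ip_vs loaded:", ok)
	}
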
	I0522 18:15:16.913951  101831 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0522 18:15:16.914004  101831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:15:16.921502  101831 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:15:16.921564  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:15:16.928993  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:15:16.944153  101831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:15:16.959523  101831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:15:16.974202  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:15:16.988963  101831 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:15:16.991795  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:15:17.000800  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:17.079221  101831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:15:17.090798  101831 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:15:17.090820  101831 certs.go:194] generating shared ca certs ...
	I0522 18:15:17.090844  101831 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.090965  101831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:15:17.091002  101831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:15:17.091008  101831 certs.go:256] generating profile certs ...
	I0522 18:15:17.091078  101831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:15:17.091129  101831 certs.go:616] failed to parse cert file /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: x509: cannot parse IP address of length 0
	I0522 18:15:17.091199  101831 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:15:17.091213  101831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:15:17.140524  101831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:15:17.140548  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140659  101831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:15:17.140670  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140730  101831 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:15:17.140925  101831 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
	I0522 18:15:17.141101  101831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:15:17.141119  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:15:17.141133  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:15:17.141147  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:15:17.141170  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:15:17.141187  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:15:17.141204  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:15:17.141219  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:15:17.141242  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:15:17.141303  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:15:17.141346  101831 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:15:17.141359  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:15:17.141388  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:15:17.141417  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:15:17.141446  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:15:17.141496  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:17.141532  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.141552  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.141573  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.142334  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:15:17.168748  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:15:17.251949  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:15:17.279089  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:15:17.360292  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:15:17.382285  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:15:17.402361  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:15:17.422080  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:15:17.441696  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:15:17.461724  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:15:17.481252  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:15:17.500617  101831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:15:17.515028  101831 ssh_runner.go:195] Run: openssl version
	I0522 18:15:17.519598  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:15:17.527181  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530162  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530202  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.535963  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:15:17.543306  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:15:17.551068  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553913  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553960  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.559966  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:15:17.567478  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:15:17.575235  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578146  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578200  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.584135  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:15:17.591800  101831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:15:17.594551  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:15:17.600342  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:15:17.606283  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:15:17.611975  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:15:17.617679  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:15:17.623211  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
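
	Each `openssl x509 -checkend 86400` run above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. The equivalent check with Go's standard library (the path is one of the certs probed above):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires
	// within d, matching what -checkend <seconds> tests.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
		if err != nil {
			panic(err)
		}
		// openssl exits non-zero in the "expires soon" case.
		fmt.Println("expires within 24h:", soon)
	}
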
	I0522 18:15:17.628747  101831 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:17.628861  101831 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:15:17.645553  101831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:15:17.653137  101831 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:15:17.653154  101831 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:15:17.653158  101831 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:15:17.653194  101831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:15:17.660437  101831 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:15:17.660808  101831 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.660901  101831 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:15:17.661141  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.661490  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.661685  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.662092  101831 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:15:17.662244  101831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:15:17.669585  101831 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:15:17.669601  101831 kubeadm.go:591] duration metric: took 16.438601ms to restartPrimaryControlPlane
	I0522 18:15:17.669608  101831 kubeadm.go:393] duration metric: took 40.865584ms to StartCluster
	I0522 18:15:17.669620  101831 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.669675  101831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.670178  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.670340  101831 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:15:17.670358  101831 start.go:240] waiting for startup goroutines ...
	I0522 18:15:17.670369  101831 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:15:17.670406  101831 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:15:17.670424  101831 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:15:17.670437  101831 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	W0522 18:15:17.670444  101831 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:15:17.670452  101831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 18:15:17.670468  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.670519  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:17.670698  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.670784  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.689774  101831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:15:17.689555  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.691107  101831 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.691126  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:15:17.691169  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.691305  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.691526  101831 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:15:17.691538  101831 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:15:17.691559  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.691847  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.710078  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.710513  101831 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:17.710529  101831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:15:17.710565  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.726905  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.803514  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.818704  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:17.855350  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.855404  101831 retry.go:31] will retry after 232.813174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:17.869892  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.869918  101831 retry.go:31] will retry after 317.212878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
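
Both addon manifests fail the same way and land in retry.go with growing waits (232ms, 317ms, 388ms, ...). A minimal sketch of that backoff-and-retry shape, assuming a jittered linear backoff; the helper name and jitter are illustrative, not minikube's exact implementation:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// sleeping a jittered, growing delay between tries, similar to the
// "will retry after ..." lines in the log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		if i == attempts-1 {
			break
		}
		delay := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	err := retryWithBackoff(5, 250*time.Millisecond, func() error {
		return fmt.Errorf("connection refused") // stand-in for the failing kubectl apply
	})
	fmt.Println("gave up:", err)
}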
	I0522 18:15:18.089255  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.139447  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.139480  101831 retry.go:31] will retry after 388.464948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.187648  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:18.237073  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.237097  101831 retry.go:31] will retry after 286.046895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.523727  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:18.528673  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.578085  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.578120  101831 retry.go:31] will retry after 730.017926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:18.580563  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.580590  101831 retry.go:31] will retry after 575.328536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.156346  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:19.207853  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.207882  101831 retry.go:31] will retry after 904.065015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.309074  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:19.360363  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.360398  101831 retry.go:31] will retry after 668.946527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.030373  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:20.081266  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.081297  101831 retry.go:31] will retry after 1.581516451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.112442  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:20.162392  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.162423  101831 retry.go:31] will retry after 799.963515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.962767  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:21.014221  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.014258  101831 retry.go:31] will retry after 2.627281568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.663009  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:21.716311  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.716340  101831 retry.go:31] will retry after 973.454643ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.690502  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:22.742767  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.742794  101831 retry.go:31] will retry after 3.340789148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.641773  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:23.775204  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.775240  101831 retry.go:31] will retry after 2.671895107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.083777  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:26.134578  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.134608  101831 retry.go:31] will retry after 4.298864045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.448092  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:26.499632  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.499662  101831 retry.go:31] will retry after 5.525229223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.434210  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:30.485401  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.485428  101831 retry.go:31] will retry after 4.916959612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.025957  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:32.076991  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.077021  101831 retry.go:31] will retry after 7.245842793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.402632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:35.454254  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.454282  101831 retry.go:31] will retry after 10.414070295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.324207  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:39.375910  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.375942  101831 retry.go:31] will retry after 9.156494241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.868576  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:45.920031  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.920063  101831 retry.go:31] will retry after 14.404576525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.532789  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:48.585261  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.585294  101831 retry.go:31] will retry after 17.974490677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.325688  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:00.377854  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.377897  101831 retry.go:31] will retry after 11.577079387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.561241  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:06.612860  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.612894  101831 retry.go:31] will retry after 14.583164714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:11.956632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:12.008606  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:12.008639  101831 retry.go:31] will retry after 46.302827634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.196878  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:21.247130  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.247161  101831 retry.go:31] will retry after 25.952174169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:47.199672  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:47.251576  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:47.251667  101831 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
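
Every apply attempt above failed identically: kubectl, running inside the node against /var/lib/minikube/kubeconfig, cannot download the OpenAPI schema because nothing is accepting connections on localhost:8443, so validation never starts (the suggested --validate=false would only skip validation, not fix the refused connection). A quick reachability probe for that endpoint, as a diagnostic sketch rather than anything minikube itself runs:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the same address the failing kubectl calls dial.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // matches the "connection refused" in the log
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}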
	I0522 18:16:58.312157  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:58.364469  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:58.364578  101831 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.366416  101831 out.go:177] * Enabled addons: 
	I0522 18:16:58.367516  101831 addons.go:505] duration metric: took 1m40.697149813s for enable addons: enabled=[]
	I0522 18:16:58.367546  101831 start.go:245] waiting for cluster config update ...
	I0522 18:16:58.367558  101831 start.go:254] writing updated cluster config ...
	I0522 18:16:58.369066  101831 out.go:177] 
	I0522 18:16:58.370289  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:16:58.370344  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.371848  101831 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:16:58.373273  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:16:58.374502  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:16:58.375701  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:16:58.375722  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:16:58.375727  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:16:58.375816  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:16:58.375840  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:16:58.375916  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.392272  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:16:58.392290  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:16:58.392305  101831 cache.go:194] Successfully downloaded all kic artifacts
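
The cache lines above boil down to one check: is the kicbase image already present in the local Docker daemon? A rough equivalent of that check using the Docker CLI; the function name is illustrative, not minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the local daemon already has the image;
// `docker image inspect` exits non-zero when the reference is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
	} else {
		fmt.Println("not cached, would pull")
	}
}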
	I0522 18:16:58.392330  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:16:58.392384  101831 start.go:364] duration metric: took 37.403µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:16:58.392400  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:16:58.392405  101831 fix.go:54] fixHost starting: m02
	I0522 18:16:58.392601  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.408748  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:16:58.408768  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:16:58.410677  101831 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:16:58.411822  101831 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:16:58.662201  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.678298  101831 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:16:58.678749  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:16:58.695431  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:16:58.695483  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:16:58.710353  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:16:58.711129  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.711158  101831 retry.go:31] will retry after 162.419442ms: ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	W0522 18:16:58.874922  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.874949  101831 retry.go:31] will retry after 374.487623ms: ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
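
The first SSH dials to the freshly restarted container are reset while sshd comes up, so sshutil retries with short waits (162ms, then 374ms). A minimal sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh, with the port, user, and key path taken from the log and everything else illustrative:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test VM only
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err
		time.Sleep(200 * time.Millisecond) // the log backs off 162ms, then 374ms
	}
	return nil, lastErr
}

func main() {
	client, err := dialWithRetry("127.0.0.1:32822", "docker",
		"/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa", 3)
	if err != nil {
		fmt.Println("ssh dial failed:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}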
	I0522 18:16:59.335651  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:16:59.339485  101831 fix.go:56] duration metric: took 947.0745ms for fixHost
	I0522 18:16:59.339510  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 947.115875ms
	W0522 18:16:59.339525  101831 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:16:59.339587  101831 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:16:59.339604  101831 start.go:728] Will try again in 5 seconds ...
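
The fatal error here is more telling than the retries: the inspect template run earlier ({{.IPAddress}},{{.GlobalIPv6Address}}) came back empty for the "ha-828033-m02" network, so splitting the output on the comma yields one empty value instead of two addresses. A hypothetical reconstruction of that lookup (the helper name and exact wiring are assumptions, not minikube's actual code); the same failed lookup also drives the configureAuth retry loop further down:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerAddresses runs the same inspect template seen in the log and
// expects "<ipv4>,<ipv6>" back; when the container has no address on the
// named network the template expands to "", which splits into one value.
// Here the network name and container name coincide, as in the log.
func containerAddresses(name string) (string, string, error) {
	tmpl := `{{with (index .NetworkSettings.Networks "` + name + `")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		return "", "", err
	}
	addrs := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(addrs) != 2 {
		// Reproduces the error text in the log: "got 1 values: []".
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
	}
	return addrs[0], addrs[1], nil
}

func main() {
	ipv4, ipv6, err := containerAddresses("ha-828033-m02")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println(ipv4, ipv6)
}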
	I0522 18:17:04.343396  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:17:04.343479  101831 start.go:364] duration metric: took 52.078µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:17:04.343499  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:17:04.343506  101831 fix.go:54] fixHost starting: m02
	I0522 18:17:04.343719  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:17:04.359537  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:17:04.359560  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:17:04.361525  101831 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:17:04.362763  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:17:04.362823  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.378286  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.378448  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.378458  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:17:04.490382  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.490408  101831 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:17:04.490471  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.506007  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.506177  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.506191  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:17:04.628978  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.629058  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.645189  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.645348  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.645364  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:17:04.759139  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:17:04.759186  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:17:04.759214  101831 ubuntu.go:177] setting up certificates
	I0522 18:17:04.759235  101831 provision.go:84] configureAuth start
	I0522 18:17:04.759332  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.775834  101831 provision.go:87] duration metric: took 16.584677ms to configureAuth
	W0522 18:17:04.775854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.775873  101831 retry.go:31] will retry after 126.959µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.777009  101831 provision.go:84] configureAuth start
	I0522 18:17:04.777074  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.793126  101831 provision.go:87] duration metric: took 16.098282ms to configureAuth
	W0522 18:17:04.793147  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.793164  101831 retry.go:31] will retry after 87.815µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.794272  101831 provision.go:84] configureAuth start
	I0522 18:17:04.794339  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.810002  101831 provision.go:87] duration metric: took 15.712157ms to configureAuth
	W0522 18:17:04.810023  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.810043  101831 retry.go:31] will retry after 160.401µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.811149  101831 provision.go:84] configureAuth start
	I0522 18:17:04.811208  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.826479  101831 provision.go:87] duration metric: took 15.314201ms to configureAuth
	W0522 18:17:04.826498  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.826513  101831 retry.go:31] will retry after 419.179µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.827621  101831 provision.go:84] configureAuth start
	I0522 18:17:04.827687  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.842837  101831 provision.go:87] duration metric: took 15.198634ms to configureAuth
	W0522 18:17:04.842854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.842870  101831 retry.go:31] will retry after 333.49µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.843983  101831 provision.go:84] configureAuth start
	I0522 18:17:04.844056  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.858999  101831 provision.go:87] duration metric: took 15.001015ms to configureAuth
	W0522 18:17:04.859014  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.859029  101831 retry.go:31] will retry after 831.427µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.860145  101831 provision.go:84] configureAuth start
	I0522 18:17:04.860207  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.874679  101831 provision.go:87] duration metric: took 14.517169ms to configureAuth
	W0522 18:17:04.874696  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.874710  101831 retry.go:31] will retry after 1.617455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.876883  101831 provision.go:84] configureAuth start
	I0522 18:17:04.876932  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.891845  101831 provision.go:87] duration metric: took 14.947571ms to configureAuth
	W0522 18:17:04.891860  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.891873  101831 retry.go:31] will retry after 1.45074ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.894054  101831 provision.go:84] configureAuth start
	I0522 18:17:04.894110  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.909207  101831 provision.go:87] duration metric: took 15.132147ms to configureAuth
	W0522 18:17:04.909224  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.909239  101831 retry.go:31] will retry after 2.781453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.912374  101831 provision.go:84] configureAuth start
	I0522 18:17:04.912425  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.927102  101831 provision.go:87] duration metric: took 14.710332ms to configureAuth
	W0522 18:17:04.927120  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.927135  101831 retry.go:31] will retry after 3.086595ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.930243  101831 provision.go:84] configureAuth start
	I0522 18:17:04.930304  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.944990  101831 provision.go:87] duration metric: took 14.727208ms to configureAuth
	W0522 18:17:04.945005  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.945020  101831 retry.go:31] will retry after 8.052612ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.953127  101831 provision.go:84] configureAuth start
	I0522 18:17:04.953199  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.968194  101831 provision.go:87] duration metric: took 15.047376ms to configureAuth
	W0522 18:17:04.968211  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.968235  101831 retry.go:31] will retry after 12.227939ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.981403  101831 provision.go:84] configureAuth start
	I0522 18:17:04.981475  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.996918  101831 provision.go:87] duration metric: took 15.4993ms to configureAuth
	W0522 18:17:04.996933  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.996947  101831 retry.go:31] will retry after 9.372006ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.007135  101831 provision.go:84] configureAuth start
	I0522 18:17:05.007251  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.021722  101831 provision.go:87] duration metric: took 14.570245ms to configureAuth
	W0522 18:17:05.021738  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.021751  101831 retry.go:31] will retry after 23.298276ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.045949  101831 provision.go:84] configureAuth start
	I0522 18:17:05.046030  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.062577  101831 provision.go:87] duration metric: took 16.607282ms to configureAuth
	W0522 18:17:05.062597  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.062613  101831 retry.go:31] will retry after 40.757138ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.103799  101831 provision.go:84] configureAuth start
	I0522 18:17:05.103887  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.119482  101831 provision.go:87] duration metric: took 15.655062ms to configureAuth
	W0522 18:17:05.119499  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.119516  101831 retry.go:31] will retry after 38.095973ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.158702  101831 provision.go:84] configureAuth start
	I0522 18:17:05.158788  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.174198  101831 provision.go:87] duration metric: took 15.463621ms to configureAuth
	W0522 18:17:05.174214  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.174232  101831 retry.go:31] will retry after 48.82201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.223426  101831 provision.go:84] configureAuth start
	I0522 18:17:05.223513  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.239564  101831 provision.go:87] duration metric: took 16.11307ms to configureAuth
	W0522 18:17:05.239581  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.239597  101831 retry.go:31] will retry after 136.469602ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.376897  101831 provision.go:84] configureAuth start
	I0522 18:17:05.377009  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.393537  101831 provision.go:87] duration metric: took 16.613386ms to configureAuth
	W0522 18:17:05.393558  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.393575  101831 retry.go:31] will retry after 161.82385ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.555925  101831 provision.go:84] configureAuth start
	I0522 18:17:05.556033  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.572787  101831 provision.go:87] duration metric: took 16.830217ms to configureAuth
	W0522 18:17:05.572804  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.572824  101831 retry.go:31] will retry after 213.087725ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.786136  101831 provision.go:84] configureAuth start
	I0522 18:17:05.786249  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.801903  101831 provision.go:87] duration metric: took 15.735371ms to configureAuth
	W0522 18:17:05.801919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.801935  101831 retry.go:31] will retry after 367.249953ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.169404  101831 provision.go:84] configureAuth start
	I0522 18:17:06.169504  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.186269  101831 provision.go:87] duration metric: took 16.837758ms to configureAuth
	W0522 18:17:06.186288  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.186306  101831 retry.go:31] will retry after 668.860958ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.856116  101831 provision.go:84] configureAuth start
	I0522 18:17:06.856211  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.872110  101831 provision.go:87] duration metric: took 15.968481ms to configureAuth
	W0522 18:17:06.872130  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.872145  101831 retry.go:31] will retry after 1.080057807s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.952333  101831 provision.go:84] configureAuth start
	I0522 18:17:07.952446  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:07.969099  101831 provision.go:87] duration metric: took 16.737681ms to configureAuth
	W0522 18:17:07.969119  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.969136  101831 retry.go:31] will retry after 1.35549681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.325582  101831 provision.go:84] configureAuth start
	I0522 18:17:09.325692  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:09.341763  101831 provision.go:87] duration metric: took 16.155925ms to configureAuth
	W0522 18:17:09.341780  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.341798  101831 retry.go:31] will retry after 1.897886244s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.240016  101831 provision.go:84] configureAuth start
	I0522 18:17:11.240140  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:11.257072  101831 provision.go:87] duration metric: took 17.02632ms to configureAuth
	W0522 18:17:11.257092  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.257114  101831 retry.go:31] will retry after 2.810888271s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.070011  101831 provision.go:84] configureAuth start
	I0522 18:17:14.070113  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:14.085901  101831 provision.go:87] duration metric: took 15.848159ms to configureAuth
	W0522 18:17:14.085919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.085935  101831 retry.go:31] will retry after 4.662344732s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.748720  101831 provision.go:84] configureAuth start
	I0522 18:17:18.748845  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:18.765467  101831 provision.go:87] duration metric: took 16.701835ms to configureAuth
	W0522 18:17:18.765486  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.765504  101831 retry.go:31] will retry after 3.216983163s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:21.983872  101831 provision.go:84] configureAuth start
	I0522 18:17:21.983984  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:22.000235  101831 provision.go:87] duration metric: took 16.33158ms to configureAuth
	W0522 18:17:22.000253  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:22.000269  101831 retry.go:31] will retry after 5.251668241s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.253805  101831 provision.go:84] configureAuth start
	I0522 18:17:27.253896  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:27.270555  101831 provision.go:87] duration metric: took 16.716068ms to configureAuth
	W0522 18:17:27.270575  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.270593  101831 retry.go:31] will retry after 7.113433713s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.388102  101831 provision.go:84] configureAuth start
	I0522 18:17:34.388187  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:34.404845  101831 provision.go:87] duration metric: took 16.712516ms to configureAuth
	W0522 18:17:34.404862  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.404878  101831 retry.go:31] will retry after 14.943192814s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.349248  101831 provision.go:84] configureAuth start
	I0522 18:17:49.349327  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:49.365985  101831 provision.go:87] duration metric: took 16.710371ms to configureAuth
	W0522 18:17:49.366002  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.366018  101831 retry.go:31] will retry after 20.509395565s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.875559  101831 provision.go:84] configureAuth start
	I0522 18:18:09.875637  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:09.892771  101831 provision.go:87] duration metric: took 17.18443ms to configureAuth
	W0522 18:18:09.892792  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.892808  101831 retry.go:31] will retry after 43.941504091s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.837442  101831 provision.go:84] configureAuth start
	I0522 18:18:53.837525  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:53.854156  101831 provision.go:87] duration metric: took 16.677406ms to configureAuth
	W0522 18:18:53.854181  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854199  101831 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854204  101831 machine.go:97] duration metric: took 1m49.491432011s to provisionDockerMachine
	I0522 18:18:53.854270  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:18:53.854308  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:18:53.869467  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:18:53.955836  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:18:53.959906  101831 fix.go:56] duration metric: took 1m49.616394756s for fixHost
	I0522 18:18:53.959927  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m49.61643748s
	W0522 18:18:53.960003  101831 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.962122  101831 out.go:177] 
	W0522 18:18:53.963599  101831 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:18:53.963614  101831 out.go:239] * 
	W0522 18:18:53.964392  101831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:18:53.965343  101831 out.go:177] 
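
Editor's note on the retry storm above: the provisioner runs the `docker container inspect` Go template shown in each attempt, which prints `IPAddress,GlobalIPv6Address` for the `ha-828033-m02` network. When the container has no entry for that network, the `{{with ...}}` block yields empty output, the comma-split produces one field instead of two, and `retry.go:31` backs off roughly exponentially (from ~127µs up to ~44s here) until `provisionDockerMachine` gives up after ~1m49s. The sketch below reproduces that failure mode under assumptions: `containerIPs` is a hypothetical helper, not minikube's actual function, and the backoff omits the jitter the real retry logic applies.

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// containerIPs runs the same Go template seen in the log above and splits
// the "IPv4,IPv6" output. An empty result (container not attached to the
// named network) splits into one field, reproducing the logged error
// "container addresses should have 2 values, got 1 values: []".
func containerIPs(name string) (string, string, error) {
	tmpl := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, name)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
	if err != nil {
		return "", "", err
	}
	parts := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(parts) != 2 {
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(parts), parts)
	}
	return parts[0], parts[1], nil
}

func main() {
	// Retry with doubling backoff, mirroring the ~127µs -> ~44s
	// progression of retry.go:31 in the log (jitter omitted).
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if v4, v6, err := containerIPs("ha-828033-m02"); err == nil {
			fmt.Println("ipv4:", v4, "ipv6:", v6)
			return
		}
		time.Sleep(delay)
		delay *= 2
	}
	fmt.Println("provision: Temporary Error: error getting ip during provisioning")
}
```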
	
	
	==> Docker <==
	May 22 18:15:16 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:16Z" level=info msg="Loaded network plugin cni"
	May 22 18:15:16 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:16Z" level=info msg="Docker cri networking managed by network plugin cni"
	May 22 18:15:16 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:16Z" level=info msg="Setting cgroupDriver cgroupfs"
	May 22 18:15:16 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:16Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	May 22 18:15:16 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:16Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	May 22 18:15:16 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:16Z" level=info msg="Start cri-dockerd grpc backend"
	May 22 18:15:16 ha-828033 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	May 22 18:15:17 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:17Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-nhhq2_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"bc32c92f2fa0451f2154953804d41863edba21af2f870a0567808c1f52d63863\""
	May 22 18:15:17 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:17Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-dxfhb_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"cddac885b8c2a5bed4d06b9a6e03613c3fcbf0b9a85b1b7454ca9ec3efb09323\""
	May 22 18:15:17 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:17Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-gznzs_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"921c71ab51b2950a38ec6a5d23870de457a14e415a7ad00f912cc87e12bfa805\""
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"4d7edccdc49b22ec9cc59e71bc3d4f4089c78b1b448eab3c8012fc9a32dfc290\". Proceed without further sandbox information."
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7621536a2355b2ed17fd4826a46eb34353e1722d46121f4d8dce21cf104fbc3b/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7d8f14facc121954daf7040ecb42f0057a6d74fba080c60250d0c9b989d2dfd/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/169a6c9879eda81053b206f012ab25b5f0eab53a63140c4df4ccf50c3bf4f0a8/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf110b30ae61d12f067b4860abfb748b3ff223ad9c7997058c44f608448355f5/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e7d4995ae7f40c29c41768b1646800c9d56bf16def7edda6675463502dc5789/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:15:24 ha-828033 dockerd[952]: time="2024-05-22T18:15:24.393467012Z" level=info msg="ignoring event" container=99d2c0c3cbaaf9c3094945d15fbe7995850de5fe0f8215e33718701064ccca2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:35 ha-828033 dockerd[952]: time="2024-05-22T18:15:35.054053701Z" level=info msg="ignoring event" container=2d0a6ba7a450da81bb16bc8444c168516f57535d780754ce5af0d172617d2e8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:43 ha-828033 dockerd[952]: time="2024-05-22T18:15:43.326912988Z" level=info msg="ignoring event" container=b914a7a4842a45e3ccfddeaa77ddd5c83dc42be0332e2dd7aeb910b171c45311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:05 ha-828033 dockerd[952]: time="2024-05-22T18:16:05.910236913Z" level=info msg="ignoring event" container=a14905099cdd7b0890af07bfa6aa108458a0f47f512d250892828479545eb84d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:06 ha-828033 dockerd[952]: time="2024-05-22T18:16:06.328112357Z" level=info msg="ignoring event" container=3c07ff06f6142b2c6755fab16a43b9429a3ce820e788dd8dd5771c15e0e8204a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:44 ha-828033 dockerd[952]: time="2024-05-22T18:16:44.846118227Z" level=info msg="ignoring event" container=599792a4e3b530d1362c8ff4422680844fe90e1954f97299ed1ef13e8a71ddd0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:51 ha-828033 dockerd[952]: time="2024-05-22T18:16:51.325933990Z" level=info msg="ignoring event" container=4201938c43072029791dd84316a9daa6974d688f5001ca4319de67fe458d1ffb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:17:41 ha-828033 dockerd[952]: time="2024-05-22T18:17:41.181907947Z" level=info msg="ignoring event" container=9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:18:20 ha-828033 dockerd[952]: time="2024-05-22T18:18:20.328644913Z" level=info msg="ignoring event" container=ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ab6eec742dda2       91be940803172                                                                                         34 seconds ago       Exited              kube-apiserver            10                  169a6c9879eda       kube-apiserver-ha-828033
	9df3be5b44482       25a1387cdab82                                                                                         About a minute ago   Exited              kube-controller-manager   8                   6e7d4995ae7f4       kube-controller-manager-ha-828033
	0d8fa2694d165       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   bf110b30ae61d       kube-vip-ha-828033
	a3b9aabcf43d5       a52dc94f0a912                                                                                         3 minutes ago        Running             kube-scheduler            2                   7621536a2355b       kube-scheduler-ha-828033
	237edba91c861       3861cfcd7c04c                                                                                         3 minutes ago        Running             etcd                      2                   a7d8f14facc12       etcd-ha-828033
	533c1df8e6e48       a52dc94f0a912                                                                                         7 minutes ago        Exited              kube-scheduler            1                   3f8fe727d5f2c       kube-scheduler-ha-828033
	d884e203b30c3       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   a52b9affd7ecf       kube-vip-ha-828033
	5e54bd5002a08       3861cfcd7c04c                                                                                         7 minutes ago        Exited              etcd                      1                   62b9b95d560d3       etcd-ha-828033
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago       Exited              busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              25 minutes ago       Exited              kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         25 minutes ago       Exited              storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	faac4370a3326       747097150317f                                                                                         25 minutes ago       Exited              kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     25 minutes ago       Exited              kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:18:54.796695    3491 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [237edba91c86] <==
	{"level":"info","ts":"2024-05-22T18:15:24.171999Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:15:24.172115Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-22T18:15:24.172313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-05-22T18:15:24.172381Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:15:24.172464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.172492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.175121Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:15:24.17563Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:15:24.175673Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:15:24.175795Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:24.175803Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:25.561806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.562871Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:15:25.562879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.562911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.563078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.563101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.564753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:15:25.564849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5e54bd5002a0] <==
	{"level":"info","ts":"2024-05-22T18:11:54.964206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:56.250663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.251833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:11:56.251888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.252046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.253903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:11:56.253943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:15:08.835661Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:15:08.835741Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:15:08.835877Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83592Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.837589Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83762Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:15:08.837698Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-22T18:15:08.8412Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841311Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841321Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 18:18:54 up  1:01,  0 users,  load average: 0.10, 0.21, 0.35
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ab6eec742dda] <==
	I0522 18:18:20.314736       1 options.go:221] external host was not specified, using 192.168.49.2
	I0522 18:18:20.315573       1 server.go:148] Version: v1.30.1
	I0522 18:18:20.315620       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0522 18:18:20.316026       1 run.go:74] "command failed" err="x509: cannot parse IP address of length 0"
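
Editor's note: this `x509: cannot parse IP address of length 0` crash loop (also behind the `describe nodes` failure earlier) is consistent with the empty IP from the provisioning failure flowing into certificate generation: `net.ParseIP("")` is nil, and a nil entry in a certificate's `IPAddresses` is marshalled as a zero-length IP SAN, which Go's parser rejects with exactly this message. A minimal sketch, assuming only standard-library behavior (this is an illustration of the error, not minikube's cert-generation code):

```go
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		// net.ParseIP("") is nil: an empty address from a failed IP
		// lookup becomes a zero-length IP SAN in the issued cert.
		IPAddresses: []net.IP{net.ParseIP("")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_, err = x509.ParseCertificate(der)
	fmt.Println(err) // x509: cannot parse IP address of length 0
}
```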
	
	
	==> kube-controller-manager [9df3be5b4448] <==
	I0522 18:17:30.673877       1 serving.go:380] Generated self-signed cert in-memory
	I0522 18:17:31.149959       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0522 18:17:31.149983       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:17:31.151310       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0522 18:17:31.151319       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0522 18:17:31.151615       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0522 18:17:31.151721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0522 18:17:41.152974       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
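
Editor's note: the controller-manager never gets past startup because it polls the apiserver's `/healthz` until a timeout, and the apiserver (crash-looping on the x509 error above) refuses connections on 192.168.49.2:8443. A hedged sketch of that wait pattern; the URL comes from the log, while the poll interval, timeout, and `InsecureSkipVerify` are assumptions for illustration:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200 OK or timeout elapses,
// approximating the "failed to wait for apiserver being healthy" check.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for the condition: failed to get apiserver %s status", url)
}

func main() {
	if err := waitForHealthz("https://192.168.49.2:8443/healthz", 10*time.Second); err != nil {
		fmt.Println(err) // mirrors the "Error building controller context" line above
	}
}
```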
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [533c1df8e6e4] <==
	E0522 18:14:38.214180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.666046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.666106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:46.971856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:46.971918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:49.986221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:49.986269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:51.164192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:51.164258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:53.155290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:53.155333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:57.308357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:57.308427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:00.775132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:00.775178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.142808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.142853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.389919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.389963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:03.819888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:03.819951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:08.822760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.822810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.835649       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0522 18:15:08.835866       1 run.go:74] "command failed" err="finished without leader elect"
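	
	(annotation, not part of this capture) Every reflector error above is the same underlying failure: the scheduler's informers cannot list their resources because nothing is accepting connections on 192.168.49.2:8443, and the scheduler eventually exits with "finished without leader elect" before ever acquiring leadership. A quick probe of that endpoint from the host, assuming curl is available; while the apiserver container is down it should report "connection refused":
	  # probe the apiserver endpoint the scheduler cannot reach
	  curl -k https://192.168.49.2:8443/healthz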
	
	
	==> kube-scheduler [a3b9aabcf43d] <==
	E0522 18:18:11.642475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:11.835721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:11.835780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:14.623604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:14.623670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:16.437879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:16.437942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:21.354024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:21.354088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:22.872425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:22.872466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:24.992452       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:24.992522       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:25.605464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:25.605527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:32.504648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:32.504690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:33.956300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:33.956359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:38.236258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:38.236301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:51.929565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:51.929609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:54.447379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:54.447426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	
	
	==> kubelet <==
	May 22 18:18:27 ha-828033 kubelet[1391]: E0522 18:18:27.277263    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:27 ha-828033 kubelet[1391]: E0522 18:18:27.615576    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:29 ha-828033 kubelet[1391]: I0522 18:18:29.183696    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:29 ha-828033 kubelet[1391]: E0522 18:18:29.184271    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:31 ha-828033 kubelet[1391]: I0522 18:18:31.546287    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759596    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759605    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:36 ha-828033 kubelet[1391]: W0522 18:18:36.831650    1391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:36 ha-828033 kubelet[1391]: E0522 18:18:36.831737    1391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:37 ha-828033 kubelet[1391]: E0522 18:18:37.277567    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:39 ha-828033 kubelet[1391]: E0522 18:18:39.903642    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:40 ha-828033 kubelet[1391]: I0522 18:18:40.761060    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:41 ha-828033 kubelet[1391]: I0522 18:18:41.183405    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:41 ha-828033 kubelet[1391]: E0522 18:18:41.183871    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:18:42 ha-828033 kubelet[1391]: I0522 18:18:42.183526    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.183890    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975535    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975547    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:47 ha-828033 kubelet[1391]: E0522 18:18:47.278535    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:49 ha-828033 kubelet[1391]: I0522 18:18:49.976988    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191597    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191602    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191623    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:53 ha-828033 kubelet[1391]: I0522 18:18:53.183075    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:53 ha-828033 kubelet[1391]: E0522 18:18:53.183526    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
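	
	(annotation, not part of this capture) The kubelet entries show kube-apiserver and kube-controller-manager in CrashLoopBackOff (back-off 2m40s and 1m20s respectively), which explains both the failed node registration and the scheduler errors above. With the apiserver down, kubectl is unusable, so a plausible way to read the crashing containers is through the runtime directly; the ID below is the prefix of the containerID quoted in the "RemoveContainer" line and the commands are assumed diagnostics, usable only while the container still exists:
	  # list and read the crashing static-pod containers on the node
	  minikube -p ha-828033 ssh -- docker ps -a --filter name=kube-apiserver
	  minikube -p ha-828033 ssh -- docker logs --tail 50 ab6eec742dda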
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
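	
	(annotation, not part of this capture) The provisioner's leader election above uses an Endpoints lock named k8s.io-minikube-hostpath in kube-system, as shown by the event it emitted. While the apiserver was still healthy, the current lease holder could be read back with a command like the following; an assumed diagnostic, not run by the test:
	  # the leader identity lives in an annotation on this object
	  kubectl -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml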
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033: exit status 2 (248.601487ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-828033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/RestartCluster (225.35s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:413: expected profile "ha-828033" in json of 'profile list' to have "Degraded" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:15:10.483132321Z",
	            "FinishedAt": "2024-05-22T18:15:09.116884079Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e612b115b826e3419d82d7b81443bb337ae8736fcd5da15e19129972417863e7",
	            "SandboxKey": "/var/run/docker/netns/e612b115b826",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "e2ea99d68522c5a32290bcf1c36c6f217acb3d5d61a816c7582d4e1903563b0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]
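	
	(annotation, not part of this capture) Rather than scanning the full inspect document, the fields this post-mortem cares about can be pulled out with a Go template; an illustrative command, not one the test helpers ran:
	  # container state, restart count and cluster IP in one line
	  docker inspect -f '{{.State.Status}} restarts={{.RestartCount}} ip={{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-828033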

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 2 (238.980126ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterClusterRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterClusterRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterClusterRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-828033 -- exec                                                             | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | busybox-fc5497c4f-x4bg9                                                          |           |         |         |                     |                     |
	|         | -- sh -c nslookup                                                                |           |         |         |                     |                     |
	|         | host.minikube.internal | awk                                                     |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                                                          |           |         |         |                     |                     |
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033 -v=7                                                           | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-828033 -v=7                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC | 22 May 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	| node    | ha-828033 node delete m03 -v=7                                                   | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-828033 stop -v=7                                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC | 22 May 24 18:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true                                                         | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=docker                                                       |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:15:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:15:10.052711  101831 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:10.052945  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.052953  101831 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:10.052957  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.053112  101831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:10.053580  101831 out.go:298] Setting JSON to false
	I0522 18:15:10.054415  101831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3454,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:15:10.054471  101831 start.go:139] virtualization: kvm guest
	I0522 18:15:10.056675  101831 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:15:10.058040  101831 notify.go:220] Checking for updates...
	I0522 18:15:10.058046  101831 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:15:10.059343  101831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:15:10.060677  101831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:10.061800  101831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:15:10.062877  101831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:15:10.064091  101831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:15:10.065687  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:10.066119  101831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:15:10.086670  101831 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:15:10.086771  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.130648  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.122350286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.130754  101831 docker.go:295] overlay module found
	I0522 18:15:10.132447  101831 out.go:177] * Using the docker driver based on existing profile
	I0522 18:15:10.133511  101831 start.go:297] selected driver: docker
	I0522 18:15:10.133528  101831 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.133615  101831 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:15:10.133693  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.178797  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.170730392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.179465  101831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:15:10.179495  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:10.179504  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:10.179557  101831 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.181838  101831 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:15:10.182862  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:15:10.184066  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:15:10.185142  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:10.185165  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:15:10.185172  101831 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:15:10.185187  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:15:10.185275  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:15:10.185286  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:15:10.185372  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.199839  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:15:10.199866  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:15:10.199888  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:15:10.199920  101831 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:15:10.199975  101831 start.go:364] duration metric: took 36.63µs to acquireMachinesLock for "ha-828033"
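	The lock spec above (Delay:500ms Timeout:10m0s) describes a polled exclusive lock that serializes machine operations per profile. A minimal sketch of that pattern using a plain flock(2); the lock path here is hypothetical and minikube's own lock package differs in detail:

package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// acquireLock polls for an exclusive flock on path, retrying every delay
// until timeout expires, mirroring the Delay/Timeout fields in the log.
func acquireLock(path string, delay, timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		err = syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB)
		if err == nil {
			return f, nil // caller unlocks with syscall.LOCK_UN and closes
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for lock %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	start := time.Now()
	f, err := acquireLock("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()
	// uncontended, this reports microseconds, like the 36.63µs metric above
	fmt.Printf("took %s to acquire lock\n", time.Since(start))
}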
	I0522 18:15:10.199991  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:15:10.200001  101831 fix.go:54] fixHost starting: 
	I0522 18:15:10.200212  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.216528  101831 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:15:10.216569  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:15:10.218337  101831 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:15:10.219502  101831 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:15:10.489901  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.507723  101831 kic.go:430] container "ha-828033" state is running.
	I0522 18:15:10.508126  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:10.527137  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.527348  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:15:10.527408  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:10.544792  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:10.545081  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:10.545103  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:15:10.545690  101831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56208->127.0.0.1:32817: read: connection reset by peer
	I0522 18:15:13.662862  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
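	The first dial at 18:15:10.545 is reset while sshd comes up inside the just-restarted container; the attempt that succeeds lands about three seconds later, so the provisioner is retrying internally. A sketch of such a retry loop with golang.org/x/crypto/ssh, using the forwarded port from this log; authentication is omitted for brevity, so treat it as illustrative only:

package main

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry keeps attempting an SSH connection until the daemon in the
// restarted container accepts it or the deadline passes.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	deadline := time.Now().Add(timeout)
	for {
		c, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return c, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("ssh not ready: %w", err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker", // username used by the KIC machines in this log
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         5 * time.Second,
		// Auth omitted; real use would supply ssh.PublicKeys(...) with the
		// machine's id_rsa key, as the sshutil lines below do.
	}
	c, err := dialWithRetry("127.0.0.1:32817", cfg, time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer c.Close()
	fmt.Println("ssh ready")
}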
	
	I0522 18:15:13.662903  101831 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:15:13.662964  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.679655  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.679834  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.679848  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:15:13.801105  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.801184  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.817648  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.817828  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.817845  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:15:13.931153  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:15:13.931179  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:15:13.931217  101831 ubuntu.go:177] setting up certificates
	I0522 18:15:13.931238  101831 provision.go:84] configureAuth start
	I0522 18:15:13.931311  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:13.947388  101831 provision.go:143] copyHostCerts
	I0522 18:15:13.947420  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947445  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:15:13.947460  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947524  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:15:13.947607  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947625  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:15:13.947628  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947654  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:15:13.947696  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947711  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:15:13.947717  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947737  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:15:13.947784  101831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
	I0522 18:15:14.398357  101831 provision.go:177] copyRemoteCerts
	I0522 18:15:14.398411  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:15:14.398442  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.414166  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:14.499249  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:15:14.499326  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:15:14.520994  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:15:14.521050  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 18:15:14.540775  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:15:14.540816  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 18:15:14.560240  101831 provision.go:87] duration metric: took 628.988417ms to configureAuth
	I0522 18:15:14.560262  101831 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:15:14.560422  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:14.560469  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.576177  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.576336  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.576348  101831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:15:14.687318  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:15:14.687343  101831 ubuntu.go:71] root file system type: overlay
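	The provisioner shells out to df to learn the root filesystem type; the same answer is available from statfs(2) without a subprocess. A sketch assuming Linux, where 0x794c7630 is OVERLAYFS_SUPER_MAGIC from linux/magic.h:

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// statfs on "/" returns the filesystem magic; comparing against the
	// overlayfs constant reproduces the "root file system type: overlay" probe.
	const overlayMagic = 0x794c7630
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		panic(err)
	}
	fmt.Println("overlay root:", st.Type == overlayMagic)
}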
	I0522 18:15:14.687455  101831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:15:14.687517  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.704102  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.704323  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.704424  101831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:15:14.825449  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:15:14.825531  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.841507  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.841715  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.841741  101831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:15:14.955461  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
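	The empty output above means diff found no differences, so the mv/daemon-reload/restart branch of the command never ran and the running dockerd was left undisturbed. A sketch of that compare-before-restart idiom; the path in main is a scratch placeholder, not the real unit location:

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged mirrors the "diff || { mv && restart }" idiom from the log:
// the rendered unit only replaces the old one, and docker is only restarted,
// when the content actually differs.
func installIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // unchanged: skip the disruptive restart
	}
	if err := os.WriteFile(path, rendered, 0o644); err != nil {
		return false, err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return true, fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return true, nil
}

func main() {
	// demo path; real provisioning targets /lib/systemd/system/docker.service
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n")
	changed, err := installIfChanged("/tmp/docker.service", unit)
	fmt.Println(changed, err)
}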
	I0522 18:15:14.955484  101831 machine.go:97] duration metric: took 4.428121798s to provisionDockerMachine
	I0522 18:15:14.955497  101831 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:15:14.955511  101831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:15:14.955559  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:15:14.955599  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.970693  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.055854  101831 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:15:15.058722  101831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:15:15.058760  101831 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:15:15.058771  101831 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:15:15.058780  101831 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:15:15.058789  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:15:15.058832  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:15:15.058903  101831 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:15:15.058914  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:15:15.058993  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:15:15.066158  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:15.086000  101831 start.go:296] duration metric: took 130.491ms for postStartSetup
	I0522 18:15:15.086056  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:15.086093  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.101977  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.183666  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:15.187576  101831 fix.go:56] duration metric: took 4.987575013s for fixHost
	I0522 18:15:15.187597  101831 start.go:83] releasing machines lock for "ha-828033", held for 4.987611005s
	I0522 18:15:15.187662  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:15.203730  101831 ssh_runner.go:195] Run: cat /version.json
	I0522 18:15:15.203784  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.203832  101831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:15:15.203905  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.219620  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.220317  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.298438  101831 ssh_runner.go:195] Run: systemctl --version
	I0522 18:15:15.369455  101831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:15:15.373670  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:15:15.389963  101831 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:15:15.390037  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:15:15.397635  101831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:15:15.397661  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.397689  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.397785  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.411498  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:15:15.419815  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:15:15.428116  101831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.428162  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:15:15.436218  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.444432  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:15:15.452463  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.460889  101831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:15:15.468598  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:15:15.476986  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:15:15.485179  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:15:15.493301  101831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:15:15.500194  101831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:15:15.506903  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:15.578809  101831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:15:15.647535  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.647580  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.647625  101831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:15:15.659341  101831 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:15:15.659408  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:15:15.670447  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.687181  101831 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:15:15.690280  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:15:15.698889  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:15:15.716155  101831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:15:15.849757  101831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:15:15.927002  101831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.927199  101831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:15:15.958682  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.035955  101831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:15:16.309267  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:15:16.319069  101831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:15:16.329406  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.338954  101831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:15:16.411316  101831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:15:16.482185  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.558123  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:15:16.569903  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.579592  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.654464  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:15:16.713660  101831 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:15:16.713739  101831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
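	"Will wait 60s for socket path" is a polled stat on the CRI socket until cri-dockerd creates it. A sketch of that wait, assuming a plain stat probe and a 500ms poll interval:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or timeout expires, like the
// 60-second wait for /var/run/cri-dockerd.sock in the log.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}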
	I0522 18:15:16.717169  101831 start.go:562] Will wait 60s for crictl version
	I0522 18:15:16.717224  101831 ssh_runner.go:195] Run: which crictl
	I0522 18:15:16.720182  101831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:15:16.750802  101831 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:15:16.750855  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.772501  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.795663  101831 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:15:16.795751  101831 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:15:16.811580  101831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:15:16.814850  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
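	The bash one-liner above drops any stale host.minikube.internal line, appends the current mapping, and copies the temp file back over /etc/hosts in one step so the entry stays unique. A rough Go equivalent of that upsert, writing a scratch path rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost rewrites an /etc/hosts-style file so exactly one line maps the
// given name, mirroring the "grep -v ...; echo ... > /tmp/h.$$; cp" one-liner.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var out []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop the stale mapping, keep the rest
			out = append(out, line)
		}
	}
	for len(out) > 0 && out[len(out)-1] == "" { // trim trailing blanks before appending
		out = out[:len(out)-1]
	}
	out = append(out, ip+"\t"+name, "")
	return os.WriteFile(path, []byte(strings.Join(out, "\n")), 0o644)
}

func main() {
	fmt.Println(upsertHost("/tmp/hosts", "192.168.49.1", "host.minikube.internal"))
}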
	I0522 18:15:16.824839  101831 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:15:16.824958  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:16.825025  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.842616  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.842633  101831 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:15:16.842688  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.859091  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.859115  101831 cache_images.go:84] Images are preloaded, skipping loading
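	"Images are preloaded, skipping loading" is the result of diffing the expected preload list against the docker images output above. A sketch of that set difference:

package main

import "fmt"

// missingImages returns the expected refs not present in the daemon; an empty
// result is the condition behind "skipping loading" in the log.
func missingImages(expected, present []string) []string {
	have := make(map[string]bool, len(present))
	for _, img := range present {
		have[img] = true
	}
	var missing []string
	for _, img := range expected {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
	}
	present := []string{"registry.k8s.io/kube-apiserver:v1.30.1"}
	fmt.Println(missingImages(expected, present)) // [registry.k8s.io/etcd:3.5.12-0]
}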
	I0522 18:15:16.859131  101831 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:15:16.859251  101831 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:15:16.859326  101831 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:15:16.902852  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:16.902868  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:16.902882  101831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:15:16.902904  101831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:15:16.903073  101831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
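	The stacked kubeadm documents above are rendered from the options struct logged at 18:15:16.902904. A trimmed sketch of that struct-to-template rendering; the template and field names here are illustrative stand-ins, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// kubeadmOpts is a cut-down stand-in for the options struct in the log.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	NodeName         string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.49.2",
		APIServerPort:    8443,
		NodeName:         "ha-828033",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.30.1",
	}
	template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}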
	
	I0522 18:15:16.903091  101831 kube-vip.go:115] generating kube-vip config ...
	I0522 18:15:16.903133  101831 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:15:16.913846  101831 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
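	grep exiting with status 1 means no ip_vs modules are loaded, so kube-vip is configured without IPVS control-plane load-balancing and relies on ARP alone for the VIP (vip_arp is "true" in the manifest below). The same probe can read /proc/modules directly, which is all lsmod formats:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ipvsLoaded checks /proc/modules for ip_vs entries instead of shelling out
// to "lsmod | grep ip_vs".
func ipvsLoaded() (bool, error) {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := ipvsLoaded()
	fmt.Println(ok, err) // false => fall back to ARP-based VIP only
}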
	I0522 18:15:16.913951  101831 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0522 18:15:16.914004  101831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:15:16.921502  101831 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:15:16.921564  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:15:16.928993  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:15:16.944153  101831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:15:16.959523  101831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:15:16.974202  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:15:16.988963  101831 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:15:16.991795  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:15:17.000800  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:17.079221  101831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:15:17.090798  101831 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:15:17.090820  101831 certs.go:194] generating shared ca certs ...
	I0522 18:15:17.090844  101831 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.090965  101831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:15:17.091002  101831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:15:17.091008  101831 certs.go:256] generating profile certs ...
	I0522 18:15:17.091078  101831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:15:17.091129  101831 certs.go:616] failed to parse cert file /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: x509: cannot parse IP address of length 0
	I0522 18:15:17.091199  101831 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:15:17.091213  101831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:15:17.140524  101831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:15:17.140548  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140659  101831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:15:17.140670  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140730  101831 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:15:17.140925  101831 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
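	The parse failure at 18:15:17.091129 and the literal <nil> in the SAN list above line up with node m02 carrying an empty IP in the cluster config: net.ParseIP("") returns nil, which formats as <nil> and cannot be encoded into a certificate. A minimal reproduction, assuming the SANs are built with net.ParseIP and no nil check:

package main

import (
	"fmt"
	"net"
)

func main() {
	// m02 has IP:"" in the cluster config logged above; parsing it yields a
	// nil net.IP, matching the <nil> entry in the generated SAN list.
	ips := []string{"10.96.0.1", "127.0.0.1", "10.0.0.1", "192.168.49.2", "", "192.168.49.254"}
	var sans []net.IP
	for _, s := range ips {
		sans = append(sans, net.ParseIP(s)) // no nil check: the suspected failure mode
	}
	fmt.Println(sans) // [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
}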
	I0522 18:15:17.141101  101831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:15:17.141119  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:15:17.141133  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:15:17.141147  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:15:17.141170  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:15:17.141187  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:15:17.141204  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:15:17.141219  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:15:17.141242  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:15:17.141303  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:15:17.141346  101831 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:15:17.141359  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:15:17.141388  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:15:17.141417  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:15:17.141446  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:15:17.141496  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:17.141532  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.141552  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.141573  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.142334  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:15:17.168748  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:15:17.251949  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:15:17.279089  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:15:17.360292  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:15:17.382285  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:15:17.402361  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:15:17.422080  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:15:17.441696  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:15:17.461724  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:15:17.481252  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:15:17.500617  101831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:15:17.515028  101831 ssh_runner.go:195] Run: openssl version
	I0522 18:15:17.519598  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:15:17.527181  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530162  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530202  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.535963  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:15:17.543306  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:15:17.551068  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553913  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553960  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.559966  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:15:17.567478  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:15:17.575235  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578146  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578200  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.584135  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
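
Each certificate install above is followed by "openssl x509 -hash -noout" and an "ln -fs <pem> /etc/ssl/certs/<hash>.0", the subject-hash naming scheme OpenSSL uses to locate trusted CAs (b5213941.0, 51391683.0, 3ec20f2e.0 in this run). A sketch of those two steps in Go, shelling out to the same openssl invocation; the paths are illustrative, not minikube's helper:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCACert mirrors the log's two steps: compute the OpenSSL
// subject hash of the PEM, then symlink /etc/ssl/certs/<hash>.0
// at it so OpenSSL's CA lookup can find the certificate.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	// ln -fs semantics: drop any stale link before relinking.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
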
	I0522 18:15:17.591800  101831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:15:17.594551  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:15:17.600342  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:15:17.606283  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:15:17.611975  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:15:17.617679  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:15:17.623211  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
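
The "-checkend 86400" runs above ask one question per control-plane certificate: does it expire within the next 24 hours (86400 seconds)? The same check in pure Go with crypto/x509, written as a hypothetical helper rather than minikube's own code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in pemPath
// expires within d, matching what "openssl x509 -checkend" tests
// (86400 seconds = 24h in the log above).
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
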
	I0522 18:15:17.628747  101831 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:17.628861  101831 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:15:17.645553  101831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:15:17.653137  101831 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:15:17.653154  101831 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:15:17.653158  101831 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:15:17.653194  101831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:15:17.660437  101831 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:15:17.660808  101831 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.660901  101831 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:15:17.661141  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.661490  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.661685  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.662092  101831 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:15:17.662244  101831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:15:17.669585  101831 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:15:17.669601  101831 kubeadm.go:591] duration metric: took 16.438601ms to restartPrimaryControlPlane
	I0522 18:15:17.669608  101831 kubeadm.go:393] duration metric: took 40.865584ms to StartCluster
	I0522 18:15:17.669620  101831 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.669675  101831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.670178  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
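
The repair step above adds the missing "ha-828033" cluster and context entries, then rewrites the kubeconfig under a file lock. A sketch of that repair using client-go's clientcmd package, assuming an illustrative endpoint and CA path (minikube derives both from the profile, and its locking around WriteFile is omitted here):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig mirrors the "kubeconfig needs updating (will repair)"
// step: add cluster and context entries if they are missing, then
// rewrite the file in place.
func repairKubeconfig(path, name, server, caFile string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := clientcmdapi.NewCluster()
		cluster.Server = server
		cluster.CertificateAuthority = caFile
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	if cfg.CurrentContext == "" {
		cfg.CurrentContext = name
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	// Server and CA values below are the ones this log reports for the
	// profile; adjust for a real environment.
	err := repairKubeconfig(os.Getenv("KUBECONFIG"), "ha-828033",
		"https://192.168.49.2:8443",
		"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
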
	I0522 18:15:17.670340  101831 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:15:17.670358  101831 start.go:240] waiting for startup goroutines ...
	I0522 18:15:17.670369  101831 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:15:17.670406  101831 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:15:17.670424  101831 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:15:17.670437  101831 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	W0522 18:15:17.670444  101831 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:15:17.670452  101831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 18:15:17.670468  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.670519  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:17.670698  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.670784  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.689774  101831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:15:17.689555  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.691107  101831 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.691126  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:15:17.691169  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.691305  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.691526  101831 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:15:17.691538  101831 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:15:17.691559  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.691847  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.710078  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.710513  101831 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:17.710529  101831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:15:17.710565  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.726905  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.803514  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.818704  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:17.855350  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.855404  101831 retry.go:31] will retry after 232.813174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:17.869892  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.869918  101831 retry.go:31] will retry after 317.212878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
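
From here the log alternates between "kubectl apply --force" attempts and retry.go backoff announcements, one series per addon, with delays that roughly grow from a few hundred milliseconds up to 46s and wobble with jitter. A generic sketch of that retry shape, with assumed doubling factor and jitter rather than minikube's actual backoff parameters:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff sketches the pattern in the log: on failure, log
// "will retry after <delay>", sleep, and grow the delay. Jitter keeps
// the two concurrent addon retry loops from aligning.
func retryWithBackoff(attempts int, first time.Duration, f func() error) error {
	delay := first
	var err error
	for i := 0; i < attempts; i++ {
		if err = f(); err == nil {
			return nil
		}
		d := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", d, err)
		time.Sleep(d)
		delay *= 2
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	err := retryWithBackoff(3, 200*time.Millisecond, func() error {
		return errors.New("connection refused") // stand-in for the kubectl apply
	})
	fmt.Println(err)
}
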
	I0522 18:15:18.089255  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.139447  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.139480  101831 retry.go:31] will retry after 388.464948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.187648  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:18.237073  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.237097  101831 retry.go:31] will retry after 286.046895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.523727  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:18.528673  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.578085  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.578120  101831 retry.go:31] will retry after 730.017926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:18.580563  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.580590  101831 retry.go:31] will retry after 575.328536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.156346  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:19.207853  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.207882  101831 retry.go:31] will retry after 904.065015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.309074  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:19.360363  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.360398  101831 retry.go:31] will retry after 668.946527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.030373  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:20.081266  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.081297  101831 retry.go:31] will retry after 1.581516451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.112442  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:20.162392  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.162423  101831 retry.go:31] will retry after 799.963515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.962767  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:21.014221  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.014258  101831 retry.go:31] will retry after 2.627281568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.663009  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:21.716311  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.716340  101831 retry.go:31] will retry after 973.454643ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.690502  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:22.742767  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.742794  101831 retry.go:31] will retry after 3.340789148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.641773  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:23.775204  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.775240  101831 retry.go:31] will retry after 2.671895107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.083777  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:26.134578  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.134608  101831 retry.go:31] will retry after 4.298864045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.448092  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:26.499632  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.499662  101831 retry.go:31] will retry after 5.525229223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.434210  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:30.485401  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.485428  101831 retry.go:31] will retry after 4.916959612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.025957  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:32.076991  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.077021  101831 retry.go:31] will retry after 7.245842793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.402632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:35.454254  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.454282  101831 retry.go:31] will retry after 10.414070295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.324207  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:39.375910  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.375942  101831 retry.go:31] will retry after 9.156494241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.868576  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:45.920031  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.920063  101831 retry.go:31] will retry after 14.404576525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.532789  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:48.585261  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.585294  101831 retry.go:31] will retry after 17.974490677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.325688  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:00.377854  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.377897  101831 retry.go:31] will retry after 11.577079387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.561241  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:06.612860  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.612894  101831 retry.go:31] will retry after 14.583164714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:11.956632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:12.008606  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:12.008639  101831 retry.go:31] will retry after 46.302827634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.196878  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:21.247130  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.247161  101831 retry.go:31] will retry after 25.952174169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:47.199672  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:47.251576  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:47.251667  101831 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.312157  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:58.364469  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:58.364578  101831 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.366416  101831 out.go:177] * Enabled addons: 
	I0522 18:16:58.367516  101831 addons.go:505] duration metric: took 1m40.697149813s for enable addons: enabled=[]
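[Editor's note] The addon failures above show minikube's apply-and-retry behavior: each `kubectl apply --force` against the node is retried with a growing delay until an overall deadline lapses, after which the addon is reported as failed and skipped. A minimal sketch of that pattern, assuming a hypothetical `applyManifest` helper (this is not minikube's actual retry.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyManifest shells out to kubectl; a hypothetical stand-in for
    // minikube's ssh_runner-based apply seen in the log above.
    func applyManifest(path string) error {
    	out, err := exec.Command("kubectl", "apply", "--force", "-f", path).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("apply %s: %v\n%s", path, err, out)
    	}
    	return nil
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	delay := 5 * time.Second
    	for {
    		err := applyManifest("/etc/kubernetes/addons/storage-provisioner.yaml")
    		if err == nil {
    			fmt.Println("applied")
    			return
    		}
    		if time.Now().After(deadline) {
    			fmt.Println("giving up:", err)
    			return
    		}
    		fmt.Printf("will retry after %s: %v\n", delay, err)
    		time.Sleep(delay)
    		delay *= 2 // grow the interval between attempts, as in the log
    	}
    }

Here every attempt fails the same way because the apiserver behind localhost:8443 is down, so validation can never fetch the OpenAPI schema; the retries only delay the inevitable "Enabling ... returned an error".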
	I0522 18:16:58.367546  101831 start.go:245] waiting for cluster config update ...
	I0522 18:16:58.367558  101831 start.go:254] writing updated cluster config ...
	I0522 18:16:58.369066  101831 out.go:177] 
	I0522 18:16:58.370289  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:16:58.370344  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.371848  101831 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:16:58.373273  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:16:58.374502  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:16:58.375701  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:16:58.375722  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:16:58.375727  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:16:58.375816  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:16:58.375840  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:16:58.375916  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.392272  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:16:58.392290  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:16:58.392305  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:16:58.392330  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:16:58.392384  101831 start.go:364] duration metric: took 37.403µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:16:58.392400  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:16:58.392405  101831 fix.go:54] fixHost starting: m02
	I0522 18:16:58.392601  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.408748  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:16:58.408768  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:16:58.410677  101831 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:16:58.411822  101831 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:16:58.662201  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.678298  101831 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:16:58.678749  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:16:58.695431  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:16:58.695483  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:16:58.710353  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:16:58.711129  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.711158  101831 retry.go:31] will retry after 162.419442ms: ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	W0522 18:16:58.874922  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.874949  101831 retry.go:31] will retry after 374.487623ms: ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:59.335651  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:16:59.339485  101831 fix.go:56] duration metric: took 947.0745ms for fixHost
	I0522 18:16:59.339510  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 947.115875ms
	W0522 18:16:59.339525  101831 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:16:59.339587  101831 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:16:59.339604  101831 start.go:728] Will try again in 5 seconds ...
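[Editor's note] The recurring "container addresses should have 2 values, got 1 values: []" comes from parsing the output of the inspect template shown above, {{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}: if the container is not attached to a network literally named "ha-828033-m02", the `with` block emits nothing, and splitting the empty string on "," yields one empty element instead of the expected two. A minimal reproduction sketch under that assumption:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseAddrs mimics the failing check: the inspect template is
    // expected to print "<ipv4>,<ipv6>".
    func parseAddrs(inspectOut string) ([]string, error) {
    	addrs := strings.Split(strings.TrimSpace(inspectOut), ",")
    	if len(addrs) != 2 {
    		return nil, fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
    	}
    	return addrs, nil
    }

    func main() {
    	// Network entry present: "192.168.49.3," parses (empty IPv6 is fine).
    	fmt.Println(parseAddrs("192.168.49.3,"))
    	// Network entry missing: {{with ...}} printed nothing, check fails
    	// with exactly the message seen in the log ("got 1 values: []").
    	fmt.Println(parseAddrs(""))
    }

Note that fmt renders a one-element slice holding an empty string as "[]", which matches the log output byte for byte.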
	I0522 18:17:04.343396  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:17:04.343479  101831 start.go:364] duration metric: took 52.078µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:17:04.343499  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:17:04.343506  101831 fix.go:54] fixHost starting: m02
	I0522 18:17:04.343719  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:17:04.359537  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:17:04.359560  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:17:04.361525  101831 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:17:04.362763  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:17:04.362823  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.378286  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.378448  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.378458  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:17:04.490382  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.490408  101831 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:17:04.490471  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.506007  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.506177  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.506191  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:17:04.628978  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.629058  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.645189  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.645348  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.645364  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:17:04.759139  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:17:04.759186  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:17:04.759214  101831 ubuntu.go:177] setting up certificates
	I0522 18:17:04.759235  101831 provision.go:84] configureAuth start
	I0522 18:17:04.759332  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.775834  101831 provision.go:87] duration metric: took 16.584677ms to configureAuth
	W0522 18:17:04.775854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.775873  101831 retry.go:31] will retry after 126.959µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.777009  101831 provision.go:84] configureAuth start
	I0522 18:17:04.777074  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.793126  101831 provision.go:87] duration metric: took 16.098282ms to configureAuth
	W0522 18:17:04.793147  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.793164  101831 retry.go:31] will retry after 87.815µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.794272  101831 provision.go:84] configureAuth start
	I0522 18:17:04.794339  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.810002  101831 provision.go:87] duration metric: took 15.712157ms to configureAuth
	W0522 18:17:04.810023  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.810043  101831 retry.go:31] will retry after 160.401µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.811149  101831 provision.go:84] configureAuth start
	I0522 18:17:04.811208  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.826479  101831 provision.go:87] duration metric: took 15.314201ms to configureAuth
	W0522 18:17:04.826498  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.826513  101831 retry.go:31] will retry after 419.179µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.827621  101831 provision.go:84] configureAuth start
	I0522 18:17:04.827687  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.842837  101831 provision.go:87] duration metric: took 15.198634ms to configureAuth
	W0522 18:17:04.842854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.842870  101831 retry.go:31] will retry after 333.49µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.843983  101831 provision.go:84] configureAuth start
	I0522 18:17:04.844056  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.858999  101831 provision.go:87] duration metric: took 15.001015ms to configureAuth
	W0522 18:17:04.859014  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.859029  101831 retry.go:31] will retry after 831.427µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.860145  101831 provision.go:84] configureAuth start
	I0522 18:17:04.860207  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.874679  101831 provision.go:87] duration metric: took 14.517169ms to configureAuth
	W0522 18:17:04.874696  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.874710  101831 retry.go:31] will retry after 1.617455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.876883  101831 provision.go:84] configureAuth start
	I0522 18:17:04.876932  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.891845  101831 provision.go:87] duration metric: took 14.947571ms to configureAuth
	W0522 18:17:04.891860  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.891873  101831 retry.go:31] will retry after 1.45074ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.894054  101831 provision.go:84] configureAuth start
	I0522 18:17:04.894110  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.909207  101831 provision.go:87] duration metric: took 15.132147ms to configureAuth
	W0522 18:17:04.909224  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.909239  101831 retry.go:31] will retry after 2.781453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.912374  101831 provision.go:84] configureAuth start
	I0522 18:17:04.912425  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.927102  101831 provision.go:87] duration metric: took 14.710332ms to configureAuth
	W0522 18:17:04.927120  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.927135  101831 retry.go:31] will retry after 3.086595ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.930243  101831 provision.go:84] configureAuth start
	I0522 18:17:04.930304  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.944990  101831 provision.go:87] duration metric: took 14.727208ms to configureAuth
	W0522 18:17:04.945005  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.945020  101831 retry.go:31] will retry after 8.052612ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.953127  101831 provision.go:84] configureAuth start
	I0522 18:17:04.953199  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.968194  101831 provision.go:87] duration metric: took 15.047376ms to configureAuth
	W0522 18:17:04.968211  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.968235  101831 retry.go:31] will retry after 12.227939ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.981403  101831 provision.go:84] configureAuth start
	I0522 18:17:04.981475  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.996918  101831 provision.go:87] duration metric: took 15.4993ms to configureAuth
	W0522 18:17:04.996933  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.996947  101831 retry.go:31] will retry after 9.372006ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.007135  101831 provision.go:84] configureAuth start
	I0522 18:17:05.007251  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.021722  101831 provision.go:87] duration metric: took 14.570245ms to configureAuth
	W0522 18:17:05.021738  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.021751  101831 retry.go:31] will retry after 23.298276ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.045949  101831 provision.go:84] configureAuth start
	I0522 18:17:05.046030  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.062577  101831 provision.go:87] duration metric: took 16.607282ms to configureAuth
	W0522 18:17:05.062597  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.062613  101831 retry.go:31] will retry after 40.757138ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.103799  101831 provision.go:84] configureAuth start
	I0522 18:17:05.103887  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.119482  101831 provision.go:87] duration metric: took 15.655062ms to configureAuth
	W0522 18:17:05.119499  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.119516  101831 retry.go:31] will retry after 38.095973ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.158702  101831 provision.go:84] configureAuth start
	I0522 18:17:05.158788  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.174198  101831 provision.go:87] duration metric: took 15.463621ms to configureAuth
	W0522 18:17:05.174214  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.174232  101831 retry.go:31] will retry after 48.82201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.223426  101831 provision.go:84] configureAuth start
	I0522 18:17:05.223513  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.239564  101831 provision.go:87] duration metric: took 16.11307ms to configureAuth
	W0522 18:17:05.239581  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.239597  101831 retry.go:31] will retry after 136.469602ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.376897  101831 provision.go:84] configureAuth start
	I0522 18:17:05.377009  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.393537  101831 provision.go:87] duration metric: took 16.613386ms to configureAuth
	W0522 18:17:05.393558  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.393575  101831 retry.go:31] will retry after 161.82385ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.555925  101831 provision.go:84] configureAuth start
	I0522 18:17:05.556033  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.572787  101831 provision.go:87] duration metric: took 16.830217ms to configureAuth
	W0522 18:17:05.572804  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.572824  101831 retry.go:31] will retry after 213.087725ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.786136  101831 provision.go:84] configureAuth start
	I0522 18:17:05.786249  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.801903  101831 provision.go:87] duration metric: took 15.735371ms to configureAuth
	W0522 18:17:05.801919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.801935  101831 retry.go:31] will retry after 367.249953ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.169404  101831 provision.go:84] configureAuth start
	I0522 18:17:06.169504  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.186269  101831 provision.go:87] duration metric: took 16.837758ms to configureAuth
	W0522 18:17:06.186288  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.186306  101831 retry.go:31] will retry after 668.860958ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.856116  101831 provision.go:84] configureAuth start
	I0522 18:17:06.856211  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.872110  101831 provision.go:87] duration metric: took 15.968481ms to configureAuth
	W0522 18:17:06.872130  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.872145  101831 retry.go:31] will retry after 1.080057807s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.952333  101831 provision.go:84] configureAuth start
	I0522 18:17:07.952446  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:07.969099  101831 provision.go:87] duration metric: took 16.737681ms to configureAuth
	W0522 18:17:07.969119  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.969136  101831 retry.go:31] will retry after 1.35549681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.325582  101831 provision.go:84] configureAuth start
	I0522 18:17:09.325692  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:09.341763  101831 provision.go:87] duration metric: took 16.155925ms to configureAuth
	W0522 18:17:09.341780  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.341798  101831 retry.go:31] will retry after 1.897886244s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.240016  101831 provision.go:84] configureAuth start
	I0522 18:17:11.240140  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:11.257072  101831 provision.go:87] duration metric: took 17.02632ms to configureAuth
	W0522 18:17:11.257092  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.257114  101831 retry.go:31] will retry after 2.810888271s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.070011  101831 provision.go:84] configureAuth start
	I0522 18:17:14.070113  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:14.085901  101831 provision.go:87] duration metric: took 15.848159ms to configureAuth
	W0522 18:17:14.085919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.085935  101831 retry.go:31] will retry after 4.662344732s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.748720  101831 provision.go:84] configureAuth start
	I0522 18:17:18.748845  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:18.765467  101831 provision.go:87] duration metric: took 16.701835ms to configureAuth
	W0522 18:17:18.765486  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.765504  101831 retry.go:31] will retry after 3.216983163s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:21.983872  101831 provision.go:84] configureAuth start
	I0522 18:17:21.983984  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:22.000235  101831 provision.go:87] duration metric: took 16.33158ms to configureAuth
	W0522 18:17:22.000253  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:22.000269  101831 retry.go:31] will retry after 5.251668241s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.253805  101831 provision.go:84] configureAuth start
	I0522 18:17:27.253896  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:27.270555  101831 provision.go:87] duration metric: took 16.716068ms to configureAuth
	W0522 18:17:27.270575  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.270593  101831 retry.go:31] will retry after 7.113433713s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.388102  101831 provision.go:84] configureAuth start
	I0522 18:17:34.388187  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:34.404845  101831 provision.go:87] duration metric: took 16.712516ms to configureAuth
	W0522 18:17:34.404862  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.404878  101831 retry.go:31] will retry after 14.943192814s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.349248  101831 provision.go:84] configureAuth start
	I0522 18:17:49.349327  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:49.365985  101831 provision.go:87] duration metric: took 16.710371ms to configureAuth
	W0522 18:17:49.366002  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.366018  101831 retry.go:31] will retry after 20.509395565s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.875559  101831 provision.go:84] configureAuth start
	I0522 18:18:09.875637  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:09.892771  101831 provision.go:87] duration metric: took 17.18443ms to configureAuth
	W0522 18:18:09.892792  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.892808  101831 retry.go:31] will retry after 43.941504091s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.837442  101831 provision.go:84] configureAuth start
	I0522 18:18:53.837525  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:53.854156  101831 provision.go:87] duration metric: took 16.677406ms to configureAuth
	W0522 18:18:53.854181  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854199  101831 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854204  101831 machine.go:97] duration metric: took 1m49.491432011s to provisionDockerMachine
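[Editor's note] The retry cadence in the configureAuth loop above (126µs, 87µs, 160µs, ... 20.5s, 43.9s) is characteristic of exponential backoff with jitter: each failure roughly doubles a randomized wait until the overall budget is exhausted, at which point provisioning gives up after ~1m49s. A compact sketch of that policy, with a hypothetical failing callback standing in for configureAuth (not minikube's retry.go verbatim):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries fn until it succeeds or the budget runs out,
    // doubling a jittered delay between attempts.
    func retryWithBackoff(fn func() error, budget time.Duration) error {
    	stop := time.Now().Add(budget)
    	delay := 100 * time.Microsecond // first retries in the log are microseconds
    	var err error
    	for time.Now().Before(stop) {
    		if err = fn(); err == nil {
    			return nil
    		}
    		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay))) // ±50% jitter
    		fmt.Printf("will retry after %s: %v\n", jittered, err)
    		time.Sleep(jittered)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	attempt := 0
    	err := retryWithBackoff(func() error {
    		attempt++
    		return errors.New("error getting ip during provisioning")
    	}, 2*time.Second)
    	fmt.Printf("gave up after %d attempts: %v\n", attempt, err)
    }

Backoff buys nothing here because the failure is deterministic (the network entry the template looks up never appears), so every attempt fails in ~15ms and the loop simply burns the budget.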
	I0522 18:18:53.854270  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:18:53.854308  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:18:53.869467  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:18:53.955836  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:18:53.959906  101831 fix.go:56] duration metric: took 1m49.616394756s for fixHost
	I0522 18:18:53.959927  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m49.61643748s
	W0522 18:18:53.960003  101831 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.962122  101831 out.go:177] 
	W0522 18:18:53.963599  101831 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:18:53.963614  101831 out.go:239] * 
	W0522 18:18:53.964392  101831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:18:53.965343  101831 out.go:177] 
	
	
	==> Docker <==
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7621536a2355b2ed17fd4826a46eb34353e1722d46121f4d8dce21cf104fbc3b/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a7d8f14facc121954daf7040ecb42f0057a6d74fba080c60250d0c9b989d2dfd/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/169a6c9879eda81053b206f012ab25b5f0eab53a63140c4df4ccf50c3bf4f0a8/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf110b30ae61d12f067b4860abfb748b3ff223ad9c7997058c44f608448355f5/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:15:23 ha-828033 cri-dockerd[1181]: time="2024-05-22T18:15:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6e7d4995ae7f40c29c41768b1646800c9d56bf16def7edda6675463502dc5789/resolv.conf as [nameserver 192.168.49.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:15:24 ha-828033 dockerd[952]: time="2024-05-22T18:15:24.393467012Z" level=info msg="ignoring event" container=99d2c0c3cbaaf9c3094945d15fbe7995850de5fe0f8215e33718701064ccca2f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:35 ha-828033 dockerd[952]: time="2024-05-22T18:15:35.054053701Z" level=info msg="ignoring event" container=2d0a6ba7a450da81bb16bc8444c168516f57535d780754ce5af0d172617d2e8f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:15:43 ha-828033 dockerd[952]: time="2024-05-22T18:15:43.326912988Z" level=info msg="ignoring event" container=b914a7a4842a45e3ccfddeaa77ddd5c83dc42be0332e2dd7aeb910b171c45311 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:05 ha-828033 dockerd[952]: time="2024-05-22T18:16:05.910236913Z" level=info msg="ignoring event" container=a14905099cdd7b0890af07bfa6aa108458a0f47f512d250892828479545eb84d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:06 ha-828033 dockerd[952]: time="2024-05-22T18:16:06.328112357Z" level=info msg="ignoring event" container=3c07ff06f6142b2c6755fab16a43b9429a3ce820e788dd8dd5771c15e0e8204a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:44 ha-828033 dockerd[952]: time="2024-05-22T18:16:44.846118227Z" level=info msg="ignoring event" container=599792a4e3b530d1362c8ff4422680844fe90e1954f97299ed1ef13e8a71ddd0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:16:51 ha-828033 dockerd[952]: time="2024-05-22T18:16:51.325933990Z" level=info msg="ignoring event" container=4201938c43072029791dd84316a9daa6974d688f5001ca4319de67fe458d1ffb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:17:41 ha-828033 dockerd[952]: time="2024-05-22T18:17:41.181907947Z" level=info msg="ignoring event" container=9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:18:20 ha-828033 dockerd[952]: time="2024-05-22T18:18:20.328644913Z" level=info msg="ignoring event" container=ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:55 ha-828033 dockerd[952]: 2024/05/22 18:18:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:55 ha-828033 dockerd[952]: 2024/05/22 18:18:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
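[Editor's note] The repeated "superfluous response.WriteHeader call" lines are Go's standard net/http warning that a handler (here wrapped by otelhttp instrumentation inside dockerd) set the response status more than once; only the first WriteHeader takes effect and later calls are ignored. A self-contained illustration of how net/http produces that exact warning:

    package main

    import (
    	"log"
    	"net/http"
    )

    func handler(w http.ResponseWriter, r *http.Request) {
    	w.WriteHeader(http.StatusOK) // first call wins
    	// Any later call is ignored, and net/http logs:
    	// "http: superfluous response.WriteHeader call from ..."
    	w.WriteHeader(http.StatusInternalServerError)
    }

    func main() {
    	http.HandleFunc("/", handler)
    	log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }

The warning is noisy but harmless to the request itself; in this report it coincides with the burst of API calls made while the test tears down at 18:18:54–55.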
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ab6eec742dda2       91be940803172                                                                                         36 seconds ago       Exited              kube-apiserver            10                  169a6c9879eda       kube-apiserver-ha-828033
	9df3be5b44482       25a1387cdab82                                                                                         About a minute ago   Exited              kube-controller-manager   8                   6e7d4995ae7f4       kube-controller-manager-ha-828033
	0d8fa2694d165       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   bf110b30ae61d       kube-vip-ha-828033
	a3b9aabcf43d5       a52dc94f0a912                                                                                         3 minutes ago        Running             kube-scheduler            2                   7621536a2355b       kube-scheduler-ha-828033
	237edba91c861       3861cfcd7c04c                                                                                         3 minutes ago        Running             etcd                      2                   a7d8f14facc12       etcd-ha-828033
	533c1df8e6e48       a52dc94f0a912                                                                                         7 minutes ago        Exited              kube-scheduler            1                   3f8fe727d5f2c       kube-scheduler-ha-828033
	d884e203b30c3       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   a52b9affd7ecf       kube-vip-ha-828033
	5e54bd5002a08       3861cfcd7c04c                                                                                         7 minutes ago        Exited              etcd                      1                   62b9b95d560d3       etcd-ha-828033
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago       Exited              busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              25 minutes ago       Exited              kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         25 minutes ago       Exited              storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	faac4370a3326       747097150317f                                                                                         25 minutes ago       Exited              kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     25 minutes ago       Exited              kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
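[Editor's note] The ATTEMPT column above shows kube-apiserver on restart 10 and kube-controller-manager on restart 8, i.e. both crash-looping while etcd, the scheduler and kube-vip stay up, which is consistent with the localhost:8443 connection refusals earlier. One way to pull the same view on the node, sketched as Go shelling out to the Docker CLI (the name filter is an assumption about kubelet's k8s_<container>_<pod>_... naming, not something stated in the report):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// List exited kube-apiserver containers with their exit status.
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "status=exited",
    		"--filter", "name=kube-apiserver",
    		"--format", "{{.ID}}\t{{.Names}}\t{{.Status}}").CombinedOutput()
    	if err != nil {
    		fmt.Println("docker ps failed:", err)
    		return
    	}
    	fmt.Print(string(out))
    }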
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:18:56.431810    3747 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
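[Editor's note] Besides the refused connection, every kubectl invocation in this report logs "x509: cannot parse IP address of length 0", which suggests a certificate in the chain carries an empty or malformed IP SAN that client-go's cert rotation trips over. A hedged way to inspect the SANs of a cert on disk (the path is illustrative, not taken from the report):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	// Illustrative path; substitute the client.crt referenced in the logs.
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		// A zero-length iPAddress SAN surfaces here as
    		// "x509: cannot parse IP address of length 0".
    		panic(err)
    	}
    	fmt.Println("DNS SANs:", cert.DNSNames)
    	fmt.Println("IP SANs: ", cert.IPAddresses)
    }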
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [237edba91c86] <==
	{"level":"info","ts":"2024-05-22T18:15:24.171999Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:15:24.172115Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-22T18:15:24.172313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-05-22T18:15:24.172381Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:15:24.172464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.172492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.175121Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:15:24.17563Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:15:24.175673Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:15:24.175795Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:24.175803Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:25.561806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.562871Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:15:25.562879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.562911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.563078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.563101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.564753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:15:25.564849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5e54bd5002a0] <==
	{"level":"info","ts":"2024-05-22T18:11:54.964206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:56.250663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.251833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:11:56.251888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.252046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.253903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:11:56.253943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:15:08.835661Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:15:08.835741Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:15:08.835877Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83592Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.837589Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83762Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:15:08.837698Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-22T18:15:08.8412Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841311Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841321Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 18:18:56 up  1:01,  0 users,  load average: 0.10, 0.21, 0.35
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
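
Note: over the two-minute window shown, kindnet reconciles exactly one node IP (192.168.49.2), consistent with the secondary and worker nodes never joining. While the apiserver is reachable, the same view is available via kubectl (assuming the default kubeconfig context minikube creates for the profile):

	kubectl --context ha-828033 get nodes -o wide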
	
	
	==> kube-apiserver [ab6eec742dda] <==
	I0522 18:18:20.314736       1 options.go:221] external host was not specified, using 192.168.49.2
	I0522 18:18:20.315573       1 server.go:148] Version: v1.30.1
	I0522 18:18:20.315620       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0522 18:18:20.316026       1 run.go:74] "command failed" err="x509: cannot parse IP address of length 0"
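
Note: the apiserver exits immediately because an empty string is handed to an IP parser during startup ("x509: cannot parse IP address of length 0"), which lines up with the empty control-plane endpoint reported later by "node add" (DRV_CP_ENDPOINT: failed to lookup ip for ""). A sketch for inspecting the serving certificate's SANs, one common place such an IP would come from; the cert path is the one minikube conventionally uses and may need adjusting:

	minikube ssh -p ha-828033 -- "sudo openssl x509 \
	  -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'"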
	
	
	==> kube-controller-manager [9df3be5b4448] <==
	I0522 18:17:30.673877       1 serving.go:380] Generated self-signed cert in-memory
	I0522 18:17:31.149959       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0522 18:17:31.149983       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:17:31.151310       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0522 18:17:31.151319       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0522 18:17:31.151615       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0522 18:17:31.151721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0522 18:17:41.152974       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
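
Note: the controller manager gives up after ten seconds because the apiserver health endpoint never answers. The same probe can be run by hand from inside the node (curl ships in the kicbase image; -k skips TLS verification since only reachability matters here):

	minikube ssh -p ha-828033 -- curl -sk https://192.168.49.2:8443/healthz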
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [533c1df8e6e4] <==
	E0522 18:14:38.214180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.666046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.666106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:46.971856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:46.971918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:49.986221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:49.986269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:51.164192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:51.164258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:53.155290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:53.155333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:57.308357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:57.308427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:00.775132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:00.775178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.142808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.142853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.389919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.389963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:03.819888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:03.819951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:08.822760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.822810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.835649       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0522 18:15:08.835866       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a3b9aabcf43d] <==
	E0522 18:18:11.642475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:11.835721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:11.835780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:14.623604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:14.623670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:16.437879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:16.437942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:21.354024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:21.354088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:22.872425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:22.872466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:24.992452       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:24.992522       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:25.605464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:25.605527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:32.504648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:32.504690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:33.956300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:33.956359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:38.236258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:38.236301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:51.929565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:51.929609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:54.447379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:54.447426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
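
Note: both scheduler instances loop on "connection refused" against 192.168.49.2:8443, matching the crash-looping apiserver above: nothing is listening on the secure port. A quick check from inside the node (assuming ss from iproute2 is present in the node image):

	minikube ssh -p ha-828033 -- sudo ss -ltnp '( sport = :8443 )'

Empty output here would confirm the port is simply closed rather than firewalled.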
	
	
	==> kubelet <==
	May 22 18:18:29 ha-828033 kubelet[1391]: I0522 18:18:29.183696    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:29 ha-828033 kubelet[1391]: E0522 18:18:29.184271    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:31 ha-828033 kubelet[1391]: I0522 18:18:31.546287    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759596    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759605    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:36 ha-828033 kubelet[1391]: W0522 18:18:36.831650    1391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:36 ha-828033 kubelet[1391]: E0522 18:18:36.831737    1391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:37 ha-828033 kubelet[1391]: E0522 18:18:37.277567    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:39 ha-828033 kubelet[1391]: E0522 18:18:39.903642    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:40 ha-828033 kubelet[1391]: I0522 18:18:40.761060    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:41 ha-828033 kubelet[1391]: I0522 18:18:41.183405    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:41 ha-828033 kubelet[1391]: E0522 18:18:41.183871    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:18:42 ha-828033 kubelet[1391]: I0522 18:18:42.183526    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.183890    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975535    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975547    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:47 ha-828033 kubelet[1391]: E0522 18:18:47.278535    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:49 ha-828033 kubelet[1391]: I0522 18:18:49.976988    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191597    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191602    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191623    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:53 ha-828033 kubelet[1391]: I0522 18:18:53.183075    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:53 ha-828033 kubelet[1391]: E0522 18:18:53.183526    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:18:55 ha-828033 kubelet[1391]: I0522 18:18:55.182931    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:55 ha-828033 kubelet[1391]: E0522 18:18:55.183355    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
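
Note: unlike the control-plane components, the kubelet dials the HA virtual endpoint control-plane.minikube.internal:8443, which resolves to 192.168.49.254 and fails with "no route to host", a different failure mode from the "connection refused" seen against 192.168.49.2:8443. A sketch to check how the name resolves and whether the virtual IP is routable from the node (minikube pins the name in /etc/hosts):

	minikube ssh -p ha-828033 -- "grep control-plane.minikube.internal /etc/hosts; ip route get 192.168.49.254"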
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033: exit status 2 (247.004187ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-828033" apiserver is not running, skipping kubectl commands (state="Stopped")
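helpers_test.go note: the post-mortem polls one status field per invocation; several fields can be read in a single call with a Go template over minikube's status struct. A sketch, with field names assumed from the templates already used above plus minikube's documented status output:

	out/minikube-linux-amd64 status -p ha-828033 \
	  --format '{{.Name}}: host={{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'
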
--- FAIL: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.63s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (1.49s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-828033 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p ha-828033 --control-plane -v=7 --alsologtostderr: exit status 50 (126.502109ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:18:57.035076  109348 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:18:57.035352  109348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:18:57.035362  109348 out.go:304] Setting ErrFile to fd 2...
	I0522 18:18:57.035367  109348 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:18:57.035599  109348 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:18:57.035874  109348 mustload.go:65] Loading cluster: ha-828033
	I0522 18:18:57.036250  109348 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:18:57.036642  109348 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:18:57.053300  109348 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:18:57.053599  109348 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:18:57.101809  109348 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:18:57.093495568 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:18:57.102153  109348 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:18:57.117685  109348 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:18:57.119989  109348 out.go:177] 
	W0522 18:18:57.121334  109348 out.go:239] X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-828033-m02 endpoint: failed to lookup ip for ""
	X Exiting due to DRV_CP_ENDPOINT: Unable to get control-plane node ha-828033-m02 endpoint: failed to lookup ip for ""
	W0522 18:18:57.121378  109348 out.go:239] * Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	* Suggestion: 
	
	    Recreate the cluster by running:
	    minikube delete <no value>
	    minikube start <no value>
	I0522 18:18:57.122660  109348 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:607: failed to add control-plane node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 node add -p ha-828033 --control-plane -v=7 --alsologtostderr" : exit status 50
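ha_test.go note: exit status 50 (DRV_CP_ENDPOINT) is raised because minikube cannot determine an IP for the existing control-plane node ha-828033-m02, and the suggestion text itself is broken: "minikube delete <no value>" is an unrendered Go-template placeholder where the profile flags should appear. The intended recovery is presumably the following, a hedged reconstruction in which any flags beyond the profile name would need to match the original start invocation:

	out/minikube-linux-amd64 delete -p ha-828033
	out/minikube-linux-amd64 start -p ha-828033 --ha --driver=docker --container-runtime=docker
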
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:15:10.483132321Z",
	            "FinishedAt": "2024-05-22T18:15:09.116884079Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e612b115b826e3419d82d7b81443bb337ae8736fcd5da15e19129972417863e7",
	            "SandboxKey": "/var/run/docker/netns/e612b115b826",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "e2ea99d68522c5a32290bcf1c36c6f217acb3d5d61a816c7582d4e1903563b0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
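Note: per the inspect output above, the node container is healthy at the Docker level: running, static IP 192.168.49.2 on the ha-828033 network, with 8443/tcp published to 127.0.0.1:32814. The failures are therefore inside the guest, not in the container wiring. Individual fields can be pulled with the same template engine instead of eyeballing the JSON, e.g. the published apiserver port:

	docker inspect ha-828033 \
	  --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'
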
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 2 (243.335516ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033 -v=7                                                           | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-828033 -v=7                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC | 22 May 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	| node    | ha-828033 node delete m03 -v=7                                                   | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-828033 stop -v=7                                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC | 22 May 24 18:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true                                                         | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=docker                                                       |           |         |         |                     |                     |
	| node    | add -p ha-828033                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:18 UTC |                     |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:15:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:15:10.052711  101831 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:10.052945  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.052953  101831 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:10.052957  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.053112  101831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:10.053580  101831 out.go:298] Setting JSON to false
	I0522 18:15:10.054415  101831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3454,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:15:10.054471  101831 start.go:139] virtualization: kvm guest
	I0522 18:15:10.056675  101831 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:15:10.058040  101831 notify.go:220] Checking for updates...
	I0522 18:15:10.058046  101831 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:15:10.059343  101831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:15:10.060677  101831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:10.061800  101831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:15:10.062877  101831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:15:10.064091  101831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:15:10.065687  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:10.066119  101831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:15:10.086670  101831 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:15:10.086771  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.130648  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.122350286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.130754  101831 docker.go:295] overlay module found
	I0522 18:15:10.132447  101831 out.go:177] * Using the docker driver based on existing profile
	I0522 18:15:10.133511  101831 start.go:297] selected driver: docker
	I0522 18:15:10.133528  101831 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.133615  101831 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:15:10.133693  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.178797  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.170730392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.179465  101831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:15:10.179495  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:10.179504  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:10.179557  101831 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.181838  101831 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:15:10.182862  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:15:10.184066  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:15:10.185142  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:10.185165  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:15:10.185172  101831 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:15:10.185187  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:15:10.185275  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:15:10.185286  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:15:10.185372  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.199839  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:15:10.199866  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:15:10.199888  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:15:10.199920  101831 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:15:10.199975  101831 start.go:364] duration metric: took 36.63µs to acquireMachinesLock for "ha-828033"
	I0522 18:15:10.199991  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:15:10.200001  101831 fix.go:54] fixHost starting: 
	I0522 18:15:10.200212  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.216528  101831 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:15:10.216569  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:15:10.218337  101831 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:15:10.219502  101831 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:15:10.489901  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.507723  101831 kic.go:430] container "ha-828033" state is running.
	I0522 18:15:10.508126  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:10.527137  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.527348  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:15:10.527408  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:10.544792  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:10.545081  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:10.545103  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:15:10.545690  101831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56208->127.0.0.1:32817: read: connection reset by peer
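The "connection reset by peer" here is expected immediately after "docker start": libmachine keeps retrying until sshd inside the restarted container accepts connections, which it does about three seconds later, below. A manual equivalent against the forwarded port captured in this log (32817, mapped to the container's 22/tcp) would be:

	until nc -z 127.0.0.1 32817; do sleep 1; done   # loop until the container's sshd accepts TCP connections
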
	I0522 18:15:13.662862  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.662903  101831 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:15:13.662964  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.679655  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.679834  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.679848  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:15:13.801105  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.801184  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.817648  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.817828  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.817845  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:15:13.931153  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
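The empty output above means the exact-match grep in the script found an existing ha-828033 line in /etc/hosts, so neither branch fired. The resulting entry can be checked with the same binary this report exercises:

	out/minikube-linux-amd64 -p ha-828033 ssh "grep ha-828033 /etc/hosts"
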
	I0522 18:15:13.931179  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:15:13.931217  101831 ubuntu.go:177] setting up certificates
	I0522 18:15:13.931238  101831 provision.go:84] configureAuth start
	I0522 18:15:13.931311  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:13.947388  101831 provision.go:143] copyHostCerts
	I0522 18:15:13.947420  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947445  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:15:13.947460  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947524  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:15:13.947607  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947625  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:15:13.947628  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947654  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:15:13.947696  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947711  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:15:13.947717  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947737  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:15:13.947784  101831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
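The server certificate is regenerated with the SAN list shown above (loopback, the node IP, and the hostnames). Assuming the machines path from this run, the SANs can be confirmed with openssl:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem
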
	I0522 18:15:14.398357  101831 provision.go:177] copyRemoteCerts
	I0522 18:15:14.398411  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:15:14.398442  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.414166  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:14.499249  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:15:14.499326  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:15:14.520994  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:15:14.521050  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 18:15:14.540775  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:15:14.540816  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 18:15:14.560240  101831 provision.go:87] duration metric: took 628.988417ms to configureAuth
	I0522 18:15:14.560262  101831 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:15:14.560422  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:14.560469  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.576177  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.576336  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.576348  101831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:15:14.687318  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:15:14.687343  101831 ubuntu.go:71] root file system type: overlay
	I0522 18:15:14.687455  101831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:15:14.687517  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.704102  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.704323  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.704424  101831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:15:14.825449  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
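	The bare "ExecStart=" line in the unit above is the systemd idiom for clearing an inherited start command before assigning a new one, as the in-file comments explain. Once the unit is installed, the effective service definition can be inspected with:
	
		out/minikube-linux-amd64 -p ha-828033 ssh "sudo systemctl cat docker.service"
	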
	I0522 18:15:14.825531  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.841507  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.841715  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.841741  101831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:15:14.955461  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:15:14.955484  101831 machine.go:97] duration metric: took 4.428121798s to provisionDockerMachine
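The one-liner above relies on diff -u exiting non-zero only when the two files differ, so the || branch installs the new unit and restarts docker only on an actual change. The same guard in isolation, with illustrative file and service names:

	sudo diff -u /etc/app.conf /tmp/app.conf.new || {
	  sudo mv /tmp/app.conf.new /etc/app.conf        # install the changed config
	  sudo systemctl daemon-reload && sudo systemctl restart app
	}
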
	I0522 18:15:14.955497  101831 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:15:14.955511  101831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:15:14.955559  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:15:14.955599  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.970693  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.055854  101831 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:15:15.058722  101831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:15:15.058760  101831 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:15:15.058771  101831 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:15:15.058780  101831 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:15:15.058789  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:15:15.058832  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:15:15.058903  101831 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:15:15.058914  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:15:15.058993  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:15:15.066158  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:15.086000  101831 start.go:296] duration metric: took 130.491ms for postStartSetup
	I0522 18:15:15.086056  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:15.086093  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.101977  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.183666  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:15.187576  101831 fix.go:56] duration metric: took 4.987575013s for fixHost
	I0522 18:15:15.187597  101831 start.go:83] releasing machines lock for "ha-828033", held for 4.987611005s
	I0522 18:15:15.187662  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:15.203730  101831 ssh_runner.go:195] Run: cat /version.json
	I0522 18:15:15.203784  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.203832  101831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:15:15.203905  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.219620  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.220317  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.298438  101831 ssh_runner.go:195] Run: systemctl --version
	I0522 18:15:15.369455  101831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:15:15.373670  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:15:15.389963  101831 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:15:15.390037  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:15:15.397635  101831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
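The loopback patch a few lines up injects a "name" field and pins cniVersion to 1.0.0 in any loopback CNI config it finds. The patched file is not captured in this log, but given what the sed does it would read roughly:

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}
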
	I0522 18:15:15.397661  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.397689  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.397785  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.411498  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:15:15.419815  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:15:15.428116  101831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.428162  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:15:15.436218  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.444432  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:15:15.452463  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.460889  101831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:15:15.468598  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:15:15.476986  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:15:15.485179  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:15:15.493301  101831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:15:15.500194  101831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:15:15.506903  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:15.578809  101831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:15:15.647535  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.647580  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.647625  101831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:15:15.659341  101831 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:15:15.659408  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:15:15.670447  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.687181  101831 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:15:15.690280  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:15:15.698889  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:15:15.716155  101831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:15:15.849757  101831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:15:15.927002  101831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.927199  101831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
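The 130-byte daemon.json written here carries the cgroup-driver override for dockerd. Its exact content is not captured in this log, but minikube's template is along these lines (approximate reconstruction, not verbatim):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
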
	I0522 18:15:15.958682  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.035955  101831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:15:16.309267  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:15:16.319069  101831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:15:16.329406  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.338954  101831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:15:16.411316  101831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:15:16.482185  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.558123  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:15:16.569903  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.579592  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.654464  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:15:16.713660  101831 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:15:16.713739  101831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:15:16.717169  101831 start.go:562] Will wait 60s for crictl version
	I0522 18:15:16.717224  101831 ssh_runner.go:195] Run: which crictl
	I0522 18:15:16.720182  101831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:15:16.750802  101831 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
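	The crictl probe above can be reproduced from outside the node with:
	
		out/minikube-linux-amd64 -p ha-828033 ssh "sudo crictl version"
	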
	I0522 18:15:16.750855  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.772501  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.795663  101831 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:15:16.795751  101831 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:15:16.811580  101831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:15:16.814850  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
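The hosts rewrite above filters out any stale host.minikube.internal line, appends the fresh mapping to a temp file, and then uses cp rather than mv, most likely because /etc/hosts is bind-mounted into the container and must be overwritten in place rather than replaced. The result can be checked with:

	out/minikube-linux-amd64 -p ha-828033 ssh "grep host.minikube.internal /etc/hosts"
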
	I0522 18:15:16.824839  101831 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:15:16.824958  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:16.825025  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.842616  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.842633  101831 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:15:16.842688  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.859091  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.859115  101831 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:15:16.859131  101831 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:15:16.859251  101831 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:15:16.859326  101831 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:15:16.902852  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:16.902868  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:16.902882  101831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:15:16.902904  101831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:15:16.903073  101831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:15:16.903091  101831 kube-vip.go:115] generating kube-vip config ...
	I0522 18:15:16.903133  101831 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:15:16.913846  101831 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:15:16.913951  101831 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
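	(The manifest above was rendered in ARP leader-election mode because the ip_vs probe failed: without the IPVS kernel modules kube-vip cannot program a virtual server for control-plane load balancing, so minikube falls back to gratuitous-ARP failover of the 192.168.49.254 VIP, with vip_leaderelection=true. A sketch of that decision, assuming a simple lsmod probe drives it as the log suggests; this is not minikube's kube-vip.go:

package main

import (
	"fmt"
	"os/exec"
)

// hasIPVS mirrors the `lsmod | grep ip_vs` probe logged above.
func hasIPVS() bool {
	return exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run() == nil
}

func main() {
	if hasIPVS() {
		fmt.Println("render manifest with IPVS control-plane load balancing")
	} else {
		// The branch taken in this run: ARP leader election only.
		fmt.Println("render manifest with vip_leaderelection=true (ARP mode)")
	}
}
	)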
	I0522 18:15:16.914004  101831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:15:16.921502  101831 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:15:16.921564  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:15:16.928993  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:15:16.944153  101831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:15:16.959523  101831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:15:16.974202  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:15:16.988963  101831 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:15:16.991795  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
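	(The bash one-liner above is an idempotent rewrite of /etc/hosts: strip any existing control-plane.minikube.internal entry, append the current VIP mapping, and copy the temp file back into place. The same logic in Go, for readability; an illustration, not what minikube executes:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// Drop any stale mapping for the name, as `grep -v` does above.
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	// Re-append the current VIP mapping, keeping repeated runs idempotent.
	kept = append(kept, "192.168.49.254\t"+host)
	fmt.Print(strings.Join(kept, "\n")) // the log writes this back via sudo cp
}
	)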
	I0522 18:15:17.000800  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:17.079221  101831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:15:17.090798  101831 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:15:17.090820  101831 certs.go:194] generating shared ca certs ...
	I0522 18:15:17.090844  101831 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.090965  101831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:15:17.091002  101831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:15:17.091008  101831 certs.go:256] generating profile certs ...
	I0522 18:15:17.091078  101831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:15:17.091129  101831 certs.go:616] failed to parse cert file /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: x509: cannot parse IP address of length 0
	I0522 18:15:17.091199  101831 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:15:17.091213  101831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IPs: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:15:17.140524  101831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:15:17.140548  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140659  101831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:15:17.140670  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140730  101831 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:15:17.140925  101831 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
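	(Note the <nil> entry in the SAN list above: node m02 has no IP yet — "IP:" is empty in the profile config — and feeding an empty string to net.ParseIP yields a nil net.IP. A certificate written with such an entry carries a zero-length IP SAN; creation evidently succeeded here, since the .crt and .key were written at 18:15:17.140, but every client that later re-reads the certificate fails with the "x509: cannot parse IP address of length 0" already visible at 18:15:17.091 and repeated throughout the kubectl retries below. A minimal reproduction of that failure mode, inferred from the log rather than taken from minikube's code:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		// net.ParseIP("") returns nil -- the "<nil>" in the SAN list above.
		IPAddresses: []net.IP{net.ParseIP("192.168.49.2"), net.ParseIP("")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		fmt.Println("create:", err)
		return
	}
	_, err = x509.ParseCertificate(der)
	fmt.Println("parse:", err) // x509: cannot parse IP address of length 0
}
	)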
	I0522 18:15:17.141101  101831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:15:17.141119  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:15:17.141133  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:15:17.141147  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:15:17.141170  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:15:17.141187  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:15:17.141204  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:15:17.141219  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:15:17.141242  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:15:17.141303  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:15:17.141346  101831 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:15:17.141359  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:15:17.141388  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:15:17.141417  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:15:17.141446  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:15:17.141496  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:17.141532  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.141552  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.141573  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.142334  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:15:17.168748  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:15:17.251949  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:15:17.279089  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:15:17.360292  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:15:17.382285  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:15:17.402361  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:15:17.422080  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:15:17.441696  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:15:17.461724  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:15:17.481252  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:15:17.500617  101831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:15:17.515028  101831 ssh_runner.go:195] Run: openssl version
	I0522 18:15:17.519598  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:15:17.527181  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530162  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530202  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.535963  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:15:17.543306  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:15:17.551068  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553913  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553960  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.559966  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:15:17.567478  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:15:17.575235  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578146  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578200  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.584135  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:15:17.591800  101831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:15:17.594551  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:15:17.600342  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:15:17.606283  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:15:17.611975  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:15:17.617679  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:15:17.623211  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
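	(The run of openssl probes above is a freshness check: -checkend 86400 exits non-zero if the certificate expires within the next 24 hours, which would trigger regeneration. An equivalent check in Go, for illustration; the path is just one of the files probed above:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Same window as `openssl x509 -checkend 86400`.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s; regenerate")
	} else {
		fmt.Println("certificate valid beyond the check window")
	}
}
	)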
	I0522 18:15:17.628747  101831 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:17.628861  101831 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:15:17.645553  101831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:15:17.653137  101831 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:15:17.653154  101831 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:15:17.653158  101831 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:15:17.653194  101831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:15:17.660437  101831 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:15:17.660808  101831 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.660901  101831 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:15:17.661141  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.661490  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.661685  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.662092  101831 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:15:17.662244  101831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:15:17.669585  101831 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:15:17.669601  101831 kubeadm.go:591] duration metric: took 16.438601ms to restartPrimaryControlPlane
	I0522 18:15:17.669608  101831 kubeadm.go:393] duration metric: took 40.865584ms to StartCluster
	I0522 18:15:17.669620  101831 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.669675  101831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.670178  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.670340  101831 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:15:17.670358  101831 start.go:240] waiting for startup goroutines ...
	I0522 18:15:17.670369  101831 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:15:17.670406  101831 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:15:17.670424  101831 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:15:17.670437  101831 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	W0522 18:15:17.670444  101831 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:15:17.670452  101831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 18:15:17.670468  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.670519  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:17.670698  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.670784  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.689774  101831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:15:17.689555  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.691107  101831 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.691126  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:15:17.691169  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.691305  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.691526  101831 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:15:17.691538  101831 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:15:17.691559  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.691847  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.710078  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.710513  101831 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:17.710529  101831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:15:17.710565  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.726905  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.803514  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.818704  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:17.855350  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.855404  101831 retry.go:31] will retry after 232.813174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:17.869892  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.869918  101831 retry.go:31] will retry after 317.212878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
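	(From here both addon manifests cycle through apply attempts with jittered, roughly doubling delays — 232ms, 317ms, 388ms, and so on up to about 10s. Every attempt hits the same pair of errors: the client-certificate parse failure traced above and a refused connection to the apiserver on localhost:8443. A sketch of the retry shape, with assumed parameters rather than minikube's actual retry.go values:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryApply is an illustrative stand-in for the retry loop in the log:
// jittered, roughly geometric backoff between kubectl apply attempts.
func retryApply(attempts int, apply func() error) error {
	base := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		d := base/2 + time.Duration(rand.Int63n(int64(base))) // add jitter
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
		base *= 2 // grow the window, as the delays above roughly do
	}
	return err
}

func main() {
	calls := 0
	_ = retryApply(5, func() error {
		calls++
		if calls < 4 {
			return fmt.Errorf("connect: connection refused")
		}
		return nil
	})
}
	)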
	I0522 18:15:18.089255  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.139447  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.139480  101831 retry.go:31] will retry after 388.464948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.187648  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:18.237073  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.237097  101831 retry.go:31] will retry after 286.046895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.523727  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:18.528673  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.578085  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.578120  101831 retry.go:31] will retry after 730.017926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:18.580563  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.580590  101831 retry.go:31] will retry after 575.328536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.156346  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:19.207853  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.207882  101831 retry.go:31] will retry after 904.065015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.309074  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:19.360363  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.360398  101831 retry.go:31] will retry after 668.946527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.030373  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:20.081266  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.081297  101831 retry.go:31] will retry after 1.581516451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.112442  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:20.162392  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.162423  101831 retry.go:31] will retry after 799.963515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.962767  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:21.014221  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.014258  101831 retry.go:31] will retry after 2.627281568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.663009  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:21.716311  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.716340  101831 retry.go:31] will retry after 973.454643ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.690502  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:22.742767  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.742794  101831 retry.go:31] will retry after 3.340789148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.641773  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:23.775204  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.775240  101831 retry.go:31] will retry after 2.671895107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.083777  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:26.134578  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.134608  101831 retry.go:31] will retry after 4.298864045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.448092  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:26.499632  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.499662  101831 retry.go:31] will retry after 5.525229223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.434210  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:30.485401  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.485428  101831 retry.go:31] will retry after 4.916959612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.025957  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:32.076991  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.077021  101831 retry.go:31] will retry after 7.245842793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.402632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:35.454254  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.454282  101831 retry.go:31] will retry after 10.414070295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.324207  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:39.375910  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.375942  101831 retry.go:31] will retry after 9.156494241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.868576  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:45.920031  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.920063  101831 retry.go:31] will retry after 14.404576525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.532789  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:48.585261  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.585294  101831 retry.go:31] will retry after 17.974490677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.325688  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:00.377854  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.377897  101831 retry.go:31] will retry after 11.577079387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.561241  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:06.612860  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.612894  101831 retry.go:31] will retry after 14.583164714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:11.956632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:12.008606  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:12.008639  101831 retry.go:31] will retry after 46.302827634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.196878  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:21.247130  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.247161  101831 retry.go:31] will retry after 25.952174169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:47.199672  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:47.251576  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:47.251667  101831 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.312157  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:58.364469  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:58.364578  101831 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.366416  101831 out.go:177] * Enabled addons: 
	I0522 18:16:58.367516  101831 addons.go:505] duration metric: took 1m40.697149813s for enable addons: enabled=[]
	I0522 18:16:58.367546  101831 start.go:245] waiting for cluster config update ...
	I0522 18:16:58.367558  101831 start.go:254] writing updated cluster config ...
	I0522 18:16:58.369066  101831 out.go:177] 
	I0522 18:16:58.370289  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:16:58.370344  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.371848  101831 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:16:58.373273  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:16:58.374502  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:16:58.375701  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:16:58.375722  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:16:58.375727  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:16:58.375816  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:16:58.375840  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:16:58.375916  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.392272  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:16:58.392290  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:16:58.392305  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:16:58.392330  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:16:58.392384  101831 start.go:364] duration metric: took 37.403µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:16:58.392400  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:16:58.392405  101831 fix.go:54] fixHost starting: m02
	I0522 18:16:58.392601  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.408748  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:16:58.408768  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:16:58.410677  101831 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:16:58.411822  101831 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:16:58.662201  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.678298  101831 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:16:58.678749  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:16:58.695431  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:16:58.695483  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:16:58.710353  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:16:58.711129  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.711158  101831 retry.go:31] will retry after 162.419442ms: ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	W0522 18:16:58.874922  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.874949  101831 retry.go:31] will retry after 374.487623ms: ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
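The two handshake failures above are the usual race after docker start: the host port is forwarded before sshd inside the container is ready, so early connections are reset. A rough way to wait for the port is sketched below; note it checks only that the TCP port accepts a connection, not that the SSH handshake succeeds, and the address is the host-mapped port taken from the log.

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForPort polls a TCP address until it accepts a connection or the
// deadline passes. It verifies only TCP reachability, not the SSH handshake,
// so a still-starting sshd (as in the log above) can fail afterwards anyway.
func waitForPort(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("port never came up: %w", err)
		}
		time.Sleep(200 * time.Millisecond)
	}
}

func main() {
	// 127.0.0.1:32822 is the host-mapped SSH port from the log above.
	fmt.Println(waitForPort("127.0.0.1:32822", 5*time.Second))
}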
	I0522 18:16:59.335651  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:16:59.339485  101831 fix.go:56] duration metric: took 947.0745ms for fixHost
	I0522 18:16:59.339510  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 947.115875ms
	W0522 18:16:59.339525  101831 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:16:59.339587  101831 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:16:59.339604  101831 start.go:728] Will try again in 5 seconds ...
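The fatal error here comes from parsing the output of the docker container inspect template shown above: when the restarted container is no longer attached to a network named "ha-828033-m02", the {{with (index .NetworkSettings.Networks ...)}} block renders nothing, and splitting the empty string on "," yields one value instead of the expected IPv4,IPv6 pair. The self-contained sketch below (a hypothetical reimplementation, not minikube's actual helper) reproduces that message.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIPs is an illustrative reimplementation of the inspect-and-split
// step visible in the log above.
func containerIPs(name string) (ipv4, ipv6 string, err error) {
	format := `{{with (index .NetworkSettings.Networks "` + name + `")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, name).Output()
	if err != nil {
		return "", "", err
	}
	parts := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(parts) != 2 {
		// If the network key is missing, the template prints nothing and
		// Split returns a single empty string -- the "got 1 values: []" case.
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(parts), parts)
	}
	return parts[0], parts[1], nil
}

func main() {
	ipv4, ipv6, err := containerIPs("ha-828033-m02")
	fmt.Println(ipv4, ipv6, err)
}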
	I0522 18:17:04.343396  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:17:04.343479  101831 start.go:364] duration metric: took 52.078µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:17:04.343499  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:17:04.343506  101831 fix.go:54] fixHost starting: m02
	I0522 18:17:04.343719  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:17:04.359537  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:17:04.359560  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:17:04.361525  101831 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:17:04.362763  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:17:04.362823  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.378286  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.378448  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.378458  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:17:04.490382  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.490408  101831 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:17:04.490471  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.506007  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.506177  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.506191  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:17:04.628978  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.629058  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.645189  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.645348  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.645364  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:17:04.759139  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
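The empty output above means the /etc/hosts script succeeded silently. Its logic: leave the file alone if any line already ends with the hostname, rewrite an existing 127.0.1.1 line if one exists, and append a fresh entry otherwise. The same logic expressed over the file contents as a string in Go, as a sketch rather than minikube's code:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the shell script above: keep /etc/hosts untouched
// if the hostname is already present, rewrite an existing 127.0.1.1 line if
// there is one, and append a new entry otherwise.
func ensureHostsEntry(hosts, hostname string) string {
	present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
	if present.MatchString(hosts) {
		return hosts
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
	}
	return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
}

func main() {
	fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "ha-828033-m02"))
}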
	I0522 18:17:04.759186  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:17:04.759214  101831 ubuntu.go:177] setting up certificates
	I0522 18:17:04.759235  101831 provision.go:84] configureAuth start
	I0522 18:17:04.759332  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.775834  101831 provision.go:87] duration metric: took 16.584677ms to configureAuth
	W0522 18:17:04.775854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.775873  101831 retry.go:31] will retry after 126.959µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.777009  101831 provision.go:84] configureAuth start
	I0522 18:17:04.777074  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.793126  101831 provision.go:87] duration metric: took 16.098282ms to configureAuth
	W0522 18:17:04.793147  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.793164  101831 retry.go:31] will retry after 87.815µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.794272  101831 provision.go:84] configureAuth start
	I0522 18:17:04.794339  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.810002  101831 provision.go:87] duration metric: took 15.712157ms to configureAuth
	W0522 18:17:04.810023  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.810043  101831 retry.go:31] will retry after 160.401µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.811149  101831 provision.go:84] configureAuth start
	I0522 18:17:04.811208  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.826479  101831 provision.go:87] duration metric: took 15.314201ms to configureAuth
	W0522 18:17:04.826498  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.826513  101831 retry.go:31] will retry after 419.179µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.827621  101831 provision.go:84] configureAuth start
	I0522 18:17:04.827687  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.842837  101831 provision.go:87] duration metric: took 15.198634ms to configureAuth
	W0522 18:17:04.842854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.842870  101831 retry.go:31] will retry after 333.49µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.843983  101831 provision.go:84] configureAuth start
	I0522 18:17:04.844056  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.858999  101831 provision.go:87] duration metric: took 15.001015ms to configureAuth
	W0522 18:17:04.859014  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.859029  101831 retry.go:31] will retry after 831.427µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.860145  101831 provision.go:84] configureAuth start
	I0522 18:17:04.860207  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.874679  101831 provision.go:87] duration metric: took 14.517169ms to configureAuth
	W0522 18:17:04.874696  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.874710  101831 retry.go:31] will retry after 1.617455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.876883  101831 provision.go:84] configureAuth start
	I0522 18:17:04.876932  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.891845  101831 provision.go:87] duration metric: took 14.947571ms to configureAuth
	W0522 18:17:04.891860  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.891873  101831 retry.go:31] will retry after 1.45074ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.894054  101831 provision.go:84] configureAuth start
	I0522 18:17:04.894110  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.909207  101831 provision.go:87] duration metric: took 15.132147ms to configureAuth
	W0522 18:17:04.909224  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.909239  101831 retry.go:31] will retry after 2.781453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.912374  101831 provision.go:84] configureAuth start
	I0522 18:17:04.912425  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.927102  101831 provision.go:87] duration metric: took 14.710332ms to configureAuth
	W0522 18:17:04.927120  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.927135  101831 retry.go:31] will retry after 3.086595ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.930243  101831 provision.go:84] configureAuth start
	I0522 18:17:04.930304  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.944990  101831 provision.go:87] duration metric: took 14.727208ms to configureAuth
	W0522 18:17:04.945005  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.945020  101831 retry.go:31] will retry after 8.052612ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.953127  101831 provision.go:84] configureAuth start
	I0522 18:17:04.953199  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.968194  101831 provision.go:87] duration metric: took 15.047376ms to configureAuth
	W0522 18:17:04.968211  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.968235  101831 retry.go:31] will retry after 12.227939ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.981403  101831 provision.go:84] configureAuth start
	I0522 18:17:04.981475  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.996918  101831 provision.go:87] duration metric: took 15.4993ms to configureAuth
	W0522 18:17:04.996933  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.996947  101831 retry.go:31] will retry after 9.372006ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.007135  101831 provision.go:84] configureAuth start
	I0522 18:17:05.007251  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.021722  101831 provision.go:87] duration metric: took 14.570245ms to configureAuth
	W0522 18:17:05.021738  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.021751  101831 retry.go:31] will retry after 23.298276ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.045949  101831 provision.go:84] configureAuth start
	I0522 18:17:05.046030  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.062577  101831 provision.go:87] duration metric: took 16.607282ms to configureAuth
	W0522 18:17:05.062597  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.062613  101831 retry.go:31] will retry after 40.757138ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.103799  101831 provision.go:84] configureAuth start
	I0522 18:17:05.103887  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.119482  101831 provision.go:87] duration metric: took 15.655062ms to configureAuth
	W0522 18:17:05.119499  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.119516  101831 retry.go:31] will retry after 38.095973ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.158702  101831 provision.go:84] configureAuth start
	I0522 18:17:05.158788  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.174198  101831 provision.go:87] duration metric: took 15.463621ms to configureAuth
	W0522 18:17:05.174214  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.174232  101831 retry.go:31] will retry after 48.82201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.223426  101831 provision.go:84] configureAuth start
	I0522 18:17:05.223513  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.239564  101831 provision.go:87] duration metric: took 16.11307ms to configureAuth
	W0522 18:17:05.239581  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.239597  101831 retry.go:31] will retry after 136.469602ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.376897  101831 provision.go:84] configureAuth start
	I0522 18:17:05.377009  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.393537  101831 provision.go:87] duration metric: took 16.613386ms to configureAuth
	W0522 18:17:05.393558  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.393575  101831 retry.go:31] will retry after 161.82385ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.555925  101831 provision.go:84] configureAuth start
	I0522 18:17:05.556033  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.572787  101831 provision.go:87] duration metric: took 16.830217ms to configureAuth
	W0522 18:17:05.572804  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.572824  101831 retry.go:31] will retry after 213.087725ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.786136  101831 provision.go:84] configureAuth start
	I0522 18:17:05.786249  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.801903  101831 provision.go:87] duration metric: took 15.735371ms to configureAuth
	W0522 18:17:05.801919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.801935  101831 retry.go:31] will retry after 367.249953ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.169404  101831 provision.go:84] configureAuth start
	I0522 18:17:06.169504  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.186269  101831 provision.go:87] duration metric: took 16.837758ms to configureAuth
	W0522 18:17:06.186288  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.186306  101831 retry.go:31] will retry after 668.860958ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.856116  101831 provision.go:84] configureAuth start
	I0522 18:17:06.856211  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.872110  101831 provision.go:87] duration metric: took 15.968481ms to configureAuth
	W0522 18:17:06.872130  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.872145  101831 retry.go:31] will retry after 1.080057807s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.952333  101831 provision.go:84] configureAuth start
	I0522 18:17:07.952446  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:07.969099  101831 provision.go:87] duration metric: took 16.737681ms to configureAuth
	W0522 18:17:07.969119  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.969136  101831 retry.go:31] will retry after 1.35549681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.325582  101831 provision.go:84] configureAuth start
	I0522 18:17:09.325692  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:09.341763  101831 provision.go:87] duration metric: took 16.155925ms to configureAuth
	W0522 18:17:09.341780  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.341798  101831 retry.go:31] will retry after 1.897886244s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.240016  101831 provision.go:84] configureAuth start
	I0522 18:17:11.240140  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:11.257072  101831 provision.go:87] duration metric: took 17.02632ms to configureAuth
	W0522 18:17:11.257092  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.257114  101831 retry.go:31] will retry after 2.810888271s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.070011  101831 provision.go:84] configureAuth start
	I0522 18:17:14.070113  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:14.085901  101831 provision.go:87] duration metric: took 15.848159ms to configureAuth
	W0522 18:17:14.085919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.085935  101831 retry.go:31] will retry after 4.662344732s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.748720  101831 provision.go:84] configureAuth start
	I0522 18:17:18.748845  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:18.765467  101831 provision.go:87] duration metric: took 16.701835ms to configureAuth
	W0522 18:17:18.765486  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.765504  101831 retry.go:31] will retry after 3.216983163s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:21.983872  101831 provision.go:84] configureAuth start
	I0522 18:17:21.983984  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:22.000235  101831 provision.go:87] duration metric: took 16.33158ms to configureAuth
	W0522 18:17:22.000253  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:22.000269  101831 retry.go:31] will retry after 5.251668241s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.253805  101831 provision.go:84] configureAuth start
	I0522 18:17:27.253896  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:27.270555  101831 provision.go:87] duration metric: took 16.716068ms to configureAuth
	W0522 18:17:27.270575  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.270593  101831 retry.go:31] will retry after 7.113433713s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.388102  101831 provision.go:84] configureAuth start
	I0522 18:17:34.388187  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:34.404845  101831 provision.go:87] duration metric: took 16.712516ms to configureAuth
	W0522 18:17:34.404862  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.404878  101831 retry.go:31] will retry after 14.943192814s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.349248  101831 provision.go:84] configureAuth start
	I0522 18:17:49.349327  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:49.365985  101831 provision.go:87] duration metric: took 16.710371ms to configureAuth
	W0522 18:17:49.366002  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.366018  101831 retry.go:31] will retry after 20.509395565s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.875559  101831 provision.go:84] configureAuth start
	I0522 18:18:09.875637  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:09.892771  101831 provision.go:87] duration metric: took 17.18443ms to configureAuth
	W0522 18:18:09.892792  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.892808  101831 retry.go:31] will retry after 43.941504091s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.837442  101831 provision.go:84] configureAuth start
	I0522 18:18:53.837525  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:53.854156  101831 provision.go:87] duration metric: took 16.677406ms to configureAuth
	W0522 18:18:53.854181  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854199  101831 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854204  101831 machine.go:97] duration metric: took 1m49.491432011s to provisionDockerMachine
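The retry.go delays through this whole configureAuth loop (126µs, 87µs, ... 20.5s, 43.9s) trace a jittered, roughly exponential backoff that gives up once a total time budget is spent, which is why provisioning was abandoned after about 1m49s of retries. A minimal sketch of that pattern follows; the starting delay, doubling factor, and jitter formula are assumptions for illustration, not minikube's actual implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with jittered, roughly doubling delays until a
// total time budget is exhausted, echoing the retry.go lines in the log.
func retryWithBackoff(op func() error, budget time.Duration) error {
	start := time.Now()
	delay := 100 * time.Microsecond
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > budget {
			return fmt.Errorf("giving up after %s: %w", time.Since(start).Round(time.Millisecond), err)
		}
		// Randomize around the current delay, then roughly double it.
		jittered := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %s: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
}

func main() {
	err := retryWithBackoff(func() error {
		return errors.New("error getting ip during provisioning")
	}, 2*time.Second)
	fmt.Println(err)
}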
	I0522 18:18:53.854270  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:18:53.854308  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:18:53.869467  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:18:53.955836  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:18:53.959906  101831 fix.go:56] duration metric: took 1m49.616394756s for fixHost
	I0522 18:18:53.959927  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m49.61643748s
	W0522 18:18:53.960003  101831 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.962122  101831 out.go:177] 
	W0522 18:18:53.963599  101831 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:18:53.963614  101831 out.go:239] * 
	W0522 18:18:53.964392  101831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:18:53.965343  101831 out.go:177] 
	
	
	==> Docker <==
	May 22 18:16:51 ha-828033 dockerd[952]: time="2024-05-22T18:16:51.325933990Z" level=info msg="ignoring event" container=4201938c43072029791dd84316a9daa6974d688f5001ca4319de67fe458d1ffb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:17:41 ha-828033 dockerd[952]: time="2024-05-22T18:17:41.181907947Z" level=info msg="ignoring event" container=9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:18:20 ha-828033 dockerd[952]: time="2024-05-22T18:18:20.328644913Z" level=info msg="ignoring event" container=ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	[the previous line repeats 21 more times between 18:18:54 and 18:18:56]
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ab6eec742dda2       91be940803172                                                                                         37 seconds ago       Exited              kube-apiserver            10                  169a6c9879eda       kube-apiserver-ha-828033
	9df3be5b44482       25a1387cdab82                                                                                         About a minute ago   Exited              kube-controller-manager   8                   6e7d4995ae7f4       kube-controller-manager-ha-828033
	0d8fa2694d165       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   bf110b30ae61d       kube-vip-ha-828033
	a3b9aabcf43d5       a52dc94f0a912                                                                                         3 minutes ago        Running             kube-scheduler            2                   7621536a2355b       kube-scheduler-ha-828033
	237edba91c861       3861cfcd7c04c                                                                                         3 minutes ago        Running             etcd                      2                   a7d8f14facc12       etcd-ha-828033
	533c1df8e6e48       a52dc94f0a912                                                                                         7 minutes ago        Exited              kube-scheduler            1                   3f8fe727d5f2c       kube-scheduler-ha-828033
	d884e203b30c3       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   a52b9affd7ecf       kube-vip-ha-828033
	5e54bd5002a08       3861cfcd7c04c                                                                                         7 minutes ago        Exited              etcd                      1                   62b9b95d560d3       etcd-ha-828033
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago       Exited              busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              25 minutes ago       Exited              kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         25 minutes ago       Exited              storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	faac4370a3326       747097150317f                                                                                         25 minutes ago       Exited              kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     25 minutes ago       Exited              kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:18:57.909309    4013 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [237edba91c86] <==
	{"level":"info","ts":"2024-05-22T18:15:24.171999Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:15:24.172115Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-22T18:15:24.172313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-05-22T18:15:24.172381Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:15:24.172464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.172492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.175121Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:15:24.17563Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:15:24.175673Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:15:24.175795Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:24.175803Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:25.561806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.562871Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:15:25.562879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.562911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.563078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.563101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.564753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:15:25.564849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5e54bd5002a0] <==
	{"level":"info","ts":"2024-05-22T18:11:54.964206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:56.250663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.251833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:11:56.251888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.252046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.253903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:11:56.253943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:15:08.835661Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:15:08.835741Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:15:08.835877Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83592Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.837589Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83762Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:15:08.837698Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-22T18:15:08.8412Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841311Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841321Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 18:18:57 up  1:01,  0 users,  load average: 0.10, 0.21, 0.35
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ab6eec742dda] <==
	I0522 18:18:20.314736       1 options.go:221] external host was not specified, using 192.168.49.2
	I0522 18:18:20.315573       1 server.go:148] Version: v1.30.1
	I0522 18:18:20.315620       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0522 18:18:20.316026       1 run.go:74] "command failed" err="x509: cannot parse IP address of length 0"
	
	
	==> kube-controller-manager [9df3be5b4448] <==
	I0522 18:17:30.673877       1 serving.go:380] Generated self-signed cert in-memory
	I0522 18:17:31.149959       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0522 18:17:31.149983       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:17:31.151310       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0522 18:17:31.151319       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0522 18:17:31.151615       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0522 18:17:31.151721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0522 18:17:41.152974       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [533c1df8e6e4] <==
	E0522 18:14:38.214180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.666046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.666106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:46.971856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:46.971918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:49.986221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:49.986269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:51.164192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:51.164258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:53.155290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:53.155333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:57.308357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:57.308427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:00.775132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:00.775178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.142808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.142853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.389919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.389963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:03.819888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:03.819951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:08.822760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.822810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.835649       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0522 18:15:08.835866       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a3b9aabcf43d] <==
	E0522 18:18:11.642475       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:11.835721       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:11.835780       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:14.623604       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:14.623670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:16.437879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:16.437942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:21.354024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:21.354088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:22.872425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:22.872466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:24.992452       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:24.992522       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:25.605464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:25.605527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:32.504648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:32.504690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:33.956300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:33.956359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:38.236258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:38.236301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:51.929565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:51.929609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:54.447379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:54.447426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	
	
	==> kubelet <==
	May 22 18:18:29 ha-828033 kubelet[1391]: E0522 18:18:29.184271    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:31 ha-828033 kubelet[1391]: I0522 18:18:31.546287    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759596    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759605    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:36 ha-828033 kubelet[1391]: W0522 18:18:36.831650    1391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:36 ha-828033 kubelet[1391]: E0522 18:18:36.831737    1391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:37 ha-828033 kubelet[1391]: E0522 18:18:37.277567    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:39 ha-828033 kubelet[1391]: E0522 18:18:39.903642    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:40 ha-828033 kubelet[1391]: I0522 18:18:40.761060    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:41 ha-828033 kubelet[1391]: I0522 18:18:41.183405    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:41 ha-828033 kubelet[1391]: E0522 18:18:41.183871    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:18:42 ha-828033 kubelet[1391]: I0522 18:18:42.183526    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.183890    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975535    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975547    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:47 ha-828033 kubelet[1391]: E0522 18:18:47.278535    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:49 ha-828033 kubelet[1391]: I0522 18:18:49.976988    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191597    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191602    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191623    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:53 ha-828033 kubelet[1391]: I0522 18:18:53.183075    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:53 ha-828033 kubelet[1391]: E0522 18:18:53.183526    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:18:55 ha-828033 kubelet[1391]: I0522 18:18:55.182931    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:55 ha-828033 kubelet[1391]: E0522 18:18:55.183355    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:57 ha-828033 kubelet[1391]: E0522 18:18:57.278653    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033: exit status 2 (251.165857ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-828033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (1.49s)

x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.62s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:304: expected profile "ha-828033" in json of 'profile list' to include 4 nodes but have 2 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-828033" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-828033\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-828033\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.1\",\"ClusterName\":\"ha-828033\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.1\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true}],\"Addons\":{\"default-storageclass\":true,\"storage-provisioner\":true},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect ha-828033
helpers_test.go:235: (dbg) docker inspect ha-828033:
-- stdout --
	[
	    {
	        "Id": "a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18",
	        "Created": "2024-05-22T17:52:56.610182625Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 102111,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:15:10.483132321Z",
	            "FinishedAt": "2024-05-22T18:15:09.116884079Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hostname",
	        "HostsPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/hosts",
	        "LogPath": "/var/lib/docker/containers/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18-json.log",
	        "Name": "/ha-828033",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-828033:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-828033",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7925abb66c52d23cbe606e639ae5f47f326d21ee961207c6d4d5c88d7fa60a92/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-828033",
	                "Source": "/var/lib/docker/volumes/ha-828033/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-828033",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-828033",
	                "name.minikube.sigs.k8s.io": "ha-828033",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "e612b115b826e3419d82d7b81443bb337ae8736fcd5da15e19129972417863e7",
	            "SandboxKey": "/var/run/docker/netns/e612b115b826",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-828033": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "NetworkID": "638c6f0967c1e35f3bafe98ee33aa3c714dba300cb11f98d6a87d37f54d53d4f",
	                    "EndpointID": "e2ea99d68522c5a32290bcf1c36c6f217acb3d5d61a816c7582d4e1903563b0a",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "ha-828033",
	                        "a436ef1be4f0"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
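The full inspect dump above is unwieldy for triage; the interesting fields can be pulled directly with a Go template, mirroring the `docker container inspect -f` calls the suite itself runs later (a sketch, not part of the original run):

	docker container inspect ha-828033 \
	  --format '{{.State.Status}} {{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}}{{end}}'

Per the dump, this would print "running 192.168.49.2".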
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p ha-828033 -n ha-828033: exit status 2 (243.522847ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 logs -n 25
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| node    | add -p ha-828033 -v=7                                                            | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test.txt                                               |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt     |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033:/home/docker/cp-test.txt                                  | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt                   |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033 sudo cat                                                               |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033-m02 sudo cat                                          | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033_ha-828033-m02.txt                                 |           |         |         |                     |                     |
	| cp      | ha-828033 cp testdata/cp-test.txt                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt                       |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | ha-828033-m02 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-828033 ssh -n ha-828033 sudo cat                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | /home/docker/cp-test_ha-828033-m02_ha-828033.txt                                 |           |         |         |                     |                     |
	| node    | ha-828033 node stop m02 -v=7                                                     | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC | 22 May 24 18:08 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-828033 node start m02 -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:08 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033 -v=7                                                           | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-828033 -v=7                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC | 22 May 24 18:11 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true -v=7                                                    | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:11 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-828033                                                                | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	| node    | ha-828033 node delete m03 -v=7                                                   | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-828033 stop -v=7                                                              | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC | 22 May 24 18:15 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-828033 --wait=true                                                         | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:15 UTC |                     |
	|         | -v=7 --alsologtostderr                                                           |           |         |         |                     |                     |
	|         | --driver=docker                                                                  |           |         |         |                     |                     |
	|         | --container-runtime=docker                                                       |           |         |         |                     |                     |
	| node    | add -p ha-828033                                                                 | ha-828033 | jenkins | v1.33.1 | 22 May 24 18:18 UTC |                     |
	|         | --control-plane -v=7                                                             |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:15:10
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:15:10.052711  101831 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:15:10.052945  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.052953  101831 out.go:304] Setting ErrFile to fd 2...
	I0522 18:15:10.052957  101831 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:15:10.053112  101831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:15:10.053580  101831 out.go:298] Setting JSON to false
	I0522 18:15:10.054415  101831 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":3454,"bootTime":1716398256,"procs":182,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:15:10.054471  101831 start.go:139] virtualization: kvm guest
	I0522 18:15:10.056675  101831 out.go:177] * [ha-828033] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:15:10.058040  101831 notify.go:220] Checking for updates...
	I0522 18:15:10.058046  101831 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:15:10.059343  101831 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:15:10.060677  101831 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:10.061800  101831 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:15:10.062877  101831 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:15:10.064091  101831 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:15:10.065687  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:10.066119  101831 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:15:10.086670  101831 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:15:10.086771  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.130648  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.122350286 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.130754  101831 docker.go:295] overlay module found
	I0522 18:15:10.132447  101831 out.go:177] * Using the docker driver based on existing profile
	I0522 18:15:10.133511  101831 start.go:297] selected driver: docker
	I0522 18:15:10.133528  101831 start.go:901] validating driver "docker" against &{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.133615  101831 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:15:10.133693  101831 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:15:10.178797  101831 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:0 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:15:10.170730392 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:15:10.179465  101831 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:15:10.179495  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:10.179504  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:10.179557  101831 start.go:340] cluster config:
	{Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:10.181838  101831 out.go:177] * Starting "ha-828033" primary control-plane node in "ha-828033" cluster
	I0522 18:15:10.182862  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:15:10.184066  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:15:10.185142  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:10.185165  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:15:10.185172  101831 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:15:10.185187  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:15:10.185275  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:15:10.185286  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:15:10.185372  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.199839  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:15:10.199866  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:15:10.199888  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:15:10.199920  101831 start.go:360] acquireMachinesLock for ha-828033: {Name:mkbefd324494a1695e1df90bb310edf3046d6e62 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:15:10.199975  101831 start.go:364] duration metric: took 36.63µs to acquireMachinesLock for "ha-828033"
	I0522 18:15:10.199991  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:15:10.200001  101831 fix.go:54] fixHost starting: 
	I0522 18:15:10.200212  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.216528  101831 fix.go:112] recreateIfNeeded on ha-828033: state=Stopped err=<nil>
	W0522 18:15:10.216569  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:15:10.218337  101831 out.go:177] * Restarting existing docker container for "ha-828033" ...
	I0522 18:15:10.219502  101831 cli_runner.go:164] Run: docker start ha-828033
	I0522 18:15:10.489901  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:10.507723  101831 kic.go:430] container "ha-828033" state is running.
	I0522 18:15:10.508126  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:10.527137  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:15:10.527348  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:15:10.527408  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:10.544792  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:10.545081  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:10.545103  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:15:10.545690  101831 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56208->127.0.0.1:32817: read: connection reset by peer
	I0522 18:15:13.662862  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.662903  101831 ubuntu.go:169] provisioning hostname "ha-828033"
	I0522 18:15:13.662964  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.679655  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.679834  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.679848  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033 && echo "ha-828033" | sudo tee /etc/hostname
	I0522 18:15:13.801105  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033
	
	I0522 18:15:13.801184  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:13.817648  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:13.817828  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:13.817845  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:15:13.931153  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
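	# The block above is the hostname-provisioning step: it ensures an
	# /etc/hosts entry for ha-828033 exists inside the node. A quick check of
	# the mapping it writes (a sketch, not part of the original run):
	#   out/minikube-linux-amd64 -p ha-828033 ssh -- grep ha-828033 /etc/hosts
	# should print the "127.0.1.1 ha-828033" line added by the script.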
	I0522 18:15:13.931179  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:15:13.931217  101831 ubuntu.go:177] setting up certificates
	I0522 18:15:13.931238  101831 provision.go:84] configureAuth start
	I0522 18:15:13.931311  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:13.947388  101831 provision.go:143] copyHostCerts
	I0522 18:15:13.947420  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947445  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:15:13.947460  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:15:13.947524  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:15:13.947607  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947625  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:15:13.947628  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:15:13.947654  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:15:13.947696  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947711  101831 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:15:13.947717  101831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:15:13.947737  101831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:15:13.947784  101831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.ha-828033 san=[127.0.0.1 192.168.49.2 ha-828033 localhost minikube]
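	# The server cert generated above carries the SANs listed in the log line.
	# They can be confirmed with openssl (a sketch, assuming openssl is on the
	# host; the path is taken from the log):
	#   openssl x509 -noout -text \
	#     -in /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem \
	#     | grep -A1 'Subject Alternative Name'
	# expected to list 127.0.0.1, 192.168.49.2, ha-828033, localhost, minikube.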
	I0522 18:15:14.398357  101831 provision.go:177] copyRemoteCerts
	I0522 18:15:14.398411  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:15:14.398442  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.414166  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:14.499249  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:15:14.499326  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:15:14.520994  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:15:14.521050  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0522 18:15:14.540775  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:15:14.540816  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0522 18:15:14.560240  101831 provision.go:87] duration metric: took 628.988417ms to configureAuth
	I0522 18:15:14.560262  101831 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:15:14.560422  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:14.560469  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.576177  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.576336  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.576348  101831 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:15:14.687318  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:15:14.687343  101831 ubuntu.go:71] root file system type: overlay
	I0522 18:15:14.687455  101831 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:15:14.687517  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.704102  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.704323  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.704424  101831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:15:14.825449  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:15:14.825531  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.841507  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:15:14.841715  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32817 <nil> <nil>}
	I0522 18:15:14.841741  101831 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:15:14.955461  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
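	# The command above is minikube's idempotent unit update: the rendered
	# docker.service.new is diffed against the installed unit, and Docker is
	# only re-enabled and restarted when the two differ. To confirm which unit
	# ended up installed (a sketch, not part of the original run):
	#   out/minikube-linux-amd64 -p ha-828033 ssh -- sudo systemctl cat docker
	# should show the cleared and re-set ExecStart= lines rendered above.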
	I0522 18:15:14.955484  101831 machine.go:97] duration metric: took 4.428121798s to provisionDockerMachine
	I0522 18:15:14.955497  101831 start.go:293] postStartSetup for "ha-828033" (driver="docker")
	I0522 18:15:14.955511  101831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:15:14.955559  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:15:14.955599  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:14.970693  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.055854  101831 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:15:15.058722  101831 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:15:15.058760  101831 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:15:15.058771  101831 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:15:15.058780  101831 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:15:15.058789  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:15:15.058832  101831 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:15:15.058903  101831 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:15:15.058914  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:15:15.058993  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:15:15.066158  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:15.086000  101831 start.go:296] duration metric: took 130.491ms for postStartSetup
	I0522 18:15:15.086056  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:15:15.086093  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.101977  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.183666  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:15:15.187576  101831 fix.go:56] duration metric: took 4.987575013s for fixHost
	I0522 18:15:15.187597  101831 start.go:83] releasing machines lock for "ha-828033", held for 4.987611005s
	I0522 18:15:15.187662  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:15:15.203730  101831 ssh_runner.go:195] Run: cat /version.json
	I0522 18:15:15.203784  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.203832  101831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:15:15.203905  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:15.219620  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.220317  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:15.298438  101831 ssh_runner.go:195] Run: systemctl --version
	I0522 18:15:15.369455  101831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:15:15.373670  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:15:15.389963  101831 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:15:15.390037  101831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:15:15.397635  101831 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:15:15.397661  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.397689  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
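	The cgroup driver detected here can be cross-checked by hand; a sketch (not part of this run):
	
	# cgroup v2 hosts report cgroup2fs; cgroup v1 hosts report tmpfs for this mount
	stat -fc %T /sys/fs/cgroup
	# what dockerd itself reports -- the same check minikube issues later via 'docker info'
	docker info --format '{{.CgroupDriver}}'
	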
	I0522 18:15:15.397785  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.411498  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:15:15.419815  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:15:15.428116  101831 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.428162  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:15:15.436218  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.444432  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:15:15.452463  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:15:15.460889  101831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:15:15.468598  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:15:15.476986  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:15:15.485179  101831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
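	Taken together, the sed edits above leave /etc/containerd/config.toml with a fragment along these lines (an illustrative reconstruction from the commands, not a dump from this run):
	
	[plugins."io.containerd.grpc.v1.cri"]
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  enable_unprivileged_ports = true
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false
	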
	I0522 18:15:15.493301  101831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:15:15.500194  101831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:15:15.506903  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:15.578809  101831 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:15:15.647535  101831 start.go:494] detecting cgroup driver to use...
	I0522 18:15:15.647580  101831 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:15:15.647625  101831 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:15:15.659341  101831 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:15:15.659408  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:15:15.670447  101831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:15:15.687181  101831 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:15:15.690280  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:15:15.698889  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:15:15.716155  101831 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:15:15.849757  101831 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:15:15.927002  101831 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:15:15.927199  101831 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
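	The 130-byte daemon.json written here is not echoed in the log; for a cgroupfs setup it would typically look like the following (an assumption about the payload, not the actual bytes):
	
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}
	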
	I0522 18:15:15.958682  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.035955  101831 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:15:16.309267  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:15:16.319069  101831 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:15:16.329406  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.338954  101831 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:15:16.411316  101831 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:15:16.482185  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.558123  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:15:16.569903  101831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:15:16.579592  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:16.654464  101831 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:15:16.713660  101831 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:15:16.713739  101831 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:15:16.717169  101831 start.go:562] Will wait 60s for crictl version
	I0522 18:15:16.717224  101831 ssh_runner.go:195] Run: which crictl
	I0522 18:15:16.720182  101831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:15:16.750802  101831 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:15:16.750855  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.772501  101831 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:15:16.795663  101831 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:15:16.795751  101831 cli_runner.go:164] Run: docker network inspect ha-828033 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:15:16.811580  101831 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0522 18:15:16.814850  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:15:16.824839  101831 kubeadm.go:877] updating cluster {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:15:16.824958  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:15:16.825025  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.842616  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.842633  101831 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:15:16.842688  101831 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:15:16.859091  101831 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:15:16.859115  101831 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:15:16.859131  101831 kubeadm.go:928] updating node { 192.168.49.2 8443 v1.30.1 docker true true} ...
	I0522 18:15:16.859251  101831 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-828033 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:15:16.859326  101831 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:15:16.902852  101831 cni.go:84] Creating CNI manager for ""
	I0522 18:15:16.902868  101831 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0522 18:15:16.902882  101831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:15:16.902904  101831 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-828033 NodeName:ha-828033 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:15:16.903073  101831 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-828033"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
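	A config like the one above is what kubeadm ultimately consumes; roughly (the exact flags minikube passes may differ, and the kubeadm path is assumed from the binaries directory shown below):
	
	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml
	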
	I0522 18:15:16.903091  101831 kube-vip.go:115] generating kube-vip config ...
	I0522 18:15:16.903133  101831 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0522 18:15:16.913846  101831 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appear not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
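	Were ipvs load-balancing wanted, the missing modules could normally be loaded before retrying the check; a sketch (module list assumed from common ipvs setups, not from this run):
	
	sudo modprobe -a ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh
	sudo sh -c "lsmod | grep ip_vs"
	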
	I0522 18:15:16.913951  101831 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
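	This manifest is written as a static pod (see the kube-vip.yaml copy below), so kubelet runs it directly and names it <name>-<nodeName>. A quick verification once the node is up (a sketch; the pod name is assumed from the static-pod naming convention):
	
	kubectl -n kube-system get pod kube-vip-ha-828033 -o wide
	ping -c1 192.168.49.254
	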
	I0522 18:15:16.914004  101831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:15:16.921502  101831 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:15:16.921564  101831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0522 18:15:16.928993  101831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0522 18:15:16.944153  101831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:15:16.959523  101831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0522 18:15:16.974202  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0522 18:15:16.988963  101831 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0522 18:15:16.991795  101831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
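	This edit and the earlier host.minikube.internal one follow the same pattern: strip any stale entry, then append the pinned name, leaving /etc/hosts lines like:
	
	192.168.49.1	host.minikube.internal
	192.168.49.254	control-plane.minikube.internal
	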
	I0522 18:15:17.000800  101831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:15:17.079221  101831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:15:17.090798  101831 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033 for IP: 192.168.49.2
	I0522 18:15:17.090820  101831 certs.go:194] generating shared ca certs ...
	I0522 18:15:17.090844  101831 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.090965  101831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:15:17.091002  101831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:15:17.091008  101831 certs.go:256] generating profile certs ...
	I0522 18:15:17.091078  101831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key
	I0522 18:15:17.091129  101831 certs.go:616] failed to parse cert file /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: x509: cannot parse IP address of length 0
	I0522 18:15:17.091199  101831 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c
	I0522 18:15:17.091213  101831 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 <nil> 192.168.49.254]
	I0522 18:15:17.140524  101831 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c ...
	I0522 18:15:17.140548  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c: {Name:mk66b7b6d58d67549d1f54b14e2dab10ef5ff901 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140659  101831 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c ...
	I0522 18:15:17.140670  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c: {Name:mk363832ce5217f507c2e6dd695f7d4ccbc00d2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.140730  101831 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt
	I0522 18:15:17.140925  101831 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key.762d4a5c -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key
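	Note the <nil> entry in the SAN IP list a few lines above; it lines up with the repeated "x509: cannot parse IP address of length 0" errors later in this run. The SANs actually embedded in the regenerated cert can be inspected directly (a sketch):
	
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	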
	I0522 18:15:17.141101  101831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key
	I0522 18:15:17.141119  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:15:17.141133  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:15:17.141147  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:15:17.141170  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:15:17.141187  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:15:17.141204  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:15:17.141219  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:15:17.141242  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:15:17.141303  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:15:17.141346  101831 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:15:17.141359  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:15:17.141388  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:15:17.141417  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:15:17.141446  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:15:17.141496  101831 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:15:17.141532  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.141552  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.141573  101831 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.142334  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:15:17.168748  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:15:17.251949  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:15:17.279089  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:15:17.360292  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:15:17.382285  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:15:17.402361  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:15:17.422080  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0522 18:15:17.441696  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:15:17.461724  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:15:17.481252  101831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:15:17.500617  101831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:15:17.515028  101831 ssh_runner.go:195] Run: openssl version
	I0522 18:15:17.519598  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:15:17.527181  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530162  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.530202  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:15:17.535963  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:15:17.543306  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:15:17.551068  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553913  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.553960  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:15:17.559966  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:15:17.567478  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:15:17.575235  101831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578146  101831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.578200  101831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:15:17.584135  101831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
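	The three symlink steps above implement OpenSSL's hashed-directory lookup: each trusted CA becomes reachable as <subject-hash>.0 under /etc/ssl/certs. The same result by hand (a sketch):
	
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"
	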
	I0522 18:15:17.591800  101831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:15:17.594551  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:15:17.600342  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:15:17.606283  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:15:17.611975  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:15:17.617679  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:15:17.623211  101831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
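	Each -checkend 86400 call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how validity is screened before reuse; e.g. (a sketch):
	
	if ! openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	  echo "apiserver.crt expires within 24h"
	fi
	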
	I0522 18:15:17.628747  101831 kubeadm.go:391] StartCluster: {Name:ha-828033 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:ha-828033 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:15:17.628861  101831 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:15:17.645553  101831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0522 18:15:17.653137  101831 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:15:17.653154  101831 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:15:17.653158  101831 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:15:17.653194  101831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:15:17.660437  101831 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:15:17.660808  101831 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-828033" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.660901  101831 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "ha-828033" cluster setting kubeconfig missing "ha-828033" context setting]
	I0522 18:15:17.661141  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.661490  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.661685  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.662092  101831 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:15:17.662244  101831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:15:17.669585  101831 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.49.2
	I0522 18:15:17.669601  101831 kubeadm.go:591] duration metric: took 16.438601ms to restartPrimaryControlPlane
	I0522 18:15:17.669608  101831 kubeadm.go:393] duration metric: took 40.865584ms to StartCluster
	I0522 18:15:17.669620  101831 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.669675  101831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.670178  101831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:15:17.670340  101831 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:15:17.670358  101831 start.go:240] waiting for startup goroutines ...
	I0522 18:15:17.670369  101831 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:15:17.670406  101831 addons.go:69] Setting storage-provisioner=true in profile "ha-828033"
	I0522 18:15:17.670424  101831 addons.go:69] Setting default-storageclass=true in profile "ha-828033"
	I0522 18:15:17.670437  101831 addons.go:234] Setting addon storage-provisioner=true in "ha-828033"
	W0522 18:15:17.670444  101831 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:15:17.670452  101831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-828033"
	I0522 18:15:17.670468  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.670519  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:15:17.670698  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.670784  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.689774  101831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:15:17.689555  101831 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:15:17.691107  101831 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.691126  101831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:15:17.691169  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.691305  101831 kapi.go:59] client config for ha-828033: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:15:17.691526  101831 addons.go:234] Setting addon default-storageclass=true in "ha-828033"
	W0522 18:15:17.691538  101831 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:15:17.691559  101831 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:15:17.691847  101831 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:15:17.710078  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.710513  101831 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:17.710529  101831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:15:17.710565  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:15:17.726905  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32817 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:15:17.803514  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:15:17.818704  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:17.855350  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.855404  101831 retry.go:31] will retry after 232.813174ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.850597    1551 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:17.869892  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:17.869918  101831 retry.go:31] will retry after 317.212878ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:17.865773    1562 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.089255  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.139447  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.139480  101831 retry.go:31] will retry after 388.464948ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.134977    1573 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.187648  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:18.237073  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.237097  101831 retry.go:31] will retry after 286.046895ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.232816    1584 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.523727  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:15:18.528673  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:18.578085  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.578120  101831 retry.go:31] will retry after 730.017926ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.572630    1596 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:15:18.580563  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:18.580590  101831 retry.go:31] will retry after 575.328536ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:18.576260    1601 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.156346  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:19.207853  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.207882  101831 retry.go:31] will retry after 904.065015ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.202838    1617 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.309074  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:19.360363  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:19.360398  101831 retry.go:31] will retry after 668.946527ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:19.355687    1628 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.030373  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:20.081266  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.081297  101831 retry.go:31] will retry after 1.581516451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.076681    1639 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.112442  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:20.162392  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.162423  101831 retry.go:31] will retry after 799.963515ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:20.158120    1649 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:20.962767  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:21.014221  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.014258  101831 retry.go:31] will retry after 2.627281568s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.009602    1659 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.663009  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:21.716311  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:21.716340  101831 retry.go:31] will retry after 973.454643ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:21.710573    1670 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.690502  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:22.742767  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:22.742794  101831 retry.go:31] will retry after 3.340789148s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:22.737924    1692 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.641773  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:23.775204  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:23.775240  101831 retry.go:31] will retry after 2.671895107s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:23.764793    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	E0522 18:15:23.771363    1761 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.083777  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:26.134578  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.134608  101831 retry.go:31] will retry after 4.298864045s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.130305    2206 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.448092  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:26.499632  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:26.499662  101831 retry.go:31] will retry after 5.525229223s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:26.494657    2217 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.434210  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:30.485401  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:30.485428  101831 retry.go:31] will retry after 4.916959612s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:30.480051    2239 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.025957  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:32.076991  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:32.077021  101831 retry.go:31] will retry after 7.245842793s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:32.072148    2261 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.402632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:35.454254  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:35.454282  101831 retry.go:31] will retry after 10.414070295s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:35.449172    2290 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.324207  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:39.375910  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:39.375942  101831 retry.go:31] will retry after 9.156494241s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:39.371357    2313 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.868576  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:15:45.920031  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:45.920063  101831 retry.go:31] will retry after 14.404576525s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:45.914874    2405 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.532789  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:15:48.585261  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:15:48.585294  101831 retry.go:31] will retry after 17.974490677s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:15:48.580813    2427 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.325688  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:00.377854  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:00.377897  101831 retry.go:31] will retry after 11.577079387s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:00.372525    2511 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.561241  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:06.612860  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:06.612894  101831 retry.go:31] will retry after 14.583164714s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:06.608114    2620 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:11.956632  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:12.008606  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:12.008639  101831 retry.go:31] will retry after 46.302827634s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:12.003344    2654 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.196878  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:21.247130  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:21.247161  101831 retry.go:31] will retry after 25.952174169s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:21.242880    2685 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:16:47.199672  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0522 18:16:47.251576  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:47.251667  101831 out.go:239] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:47.246652    2834 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.312157  101831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W0522 18:16:58.364469  101831 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:16:58.364578  101831 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:16:58.359900    2936 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I0522 18:16:58.366416  101831 out.go:177] * Enabled addons: 
	I0522 18:16:58.367516  101831 addons.go:505] duration metric: took 1m40.697149813s for enable addons: enabled=[]
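
The whole apply/retry loop above fails for one reason, visible in every stderr block: kubectl validates each manifest by downloading the OpenAPI schema from the apiserver, and nothing is listening on localhost:8443, so even `kubectl apply --force` dies at the validation step (the suggested `--validate=false` would only defer the failure to the actual API call). minikube then retries each addon with roughly exponential, jittered delays, which is what the growing "will retry after" intervals (730ms, 904ms, ... 46s) show. A minimal Go sketch of that retry pattern, under assumed signatures; this is illustrative, not minikube's actual retry.go:

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retryExpo retries fn with roughly exponential, jittered delays until it
    // succeeds or maxElapsed is exceeded, mirroring the growing "will retry
    // after" intervals in the log above. (Hypothetical helper, not minikube's.)
    func retryExpo(fn func() error, base, maxElapsed time.Duration) error {
        start := time.Now()
        delay := base
        for {
            err := fn()
            if err == nil {
                return nil
            }
            if time.Since(start) > maxElapsed {
                return fmt.Errorf("timed out retrying: %w", err)
            }
            // Jitter the delay so concurrent appliers don't retry in lockstep.
            sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
    }

    func main() {
        attempts := 0
        _ = retryExpo(func() error {
            attempts++
            if attempts < 4 {
                return fmt.Errorf("dial tcp [::1]:8443: connect: connection refused")
            }
            return nil
        }, 500*time.Millisecond, 2*time.Minute)
    }

Note that every retry here fails identically: until the apiserver itself comes up, no amount of backoff can make the addon apply succeed, which is why the phase ends with enabled=[] after 1m40s.
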
	I0522 18:16:58.367546  101831 start.go:245] waiting for cluster config update ...
	I0522 18:16:58.367558  101831 start.go:254] writing updated cluster config ...
	I0522 18:16:58.369066  101831 out.go:177] 
	I0522 18:16:58.370289  101831 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:16:58.370344  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.371848  101831 out.go:177] * Starting "ha-828033-m02" control-plane node in "ha-828033" cluster
	I0522 18:16:58.373273  101831 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:16:58.374502  101831 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:16:58.375701  101831 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:16:58.375722  101831 cache.go:56] Caching tarball of preloaded images
	I0522 18:16:58.375727  101831 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:16:58.375816  101831 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:16:58.375840  101831 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:16:58.375916  101831 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/ha-828033/config.json ...
	I0522 18:16:58.392272  101831 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:16:58.392290  101831 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:16:58.392305  101831 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:16:58.392330  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:16:58.392384  101831 start.go:364] duration metric: took 37.403µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:16:58.392400  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:16:58.392405  101831 fix.go:54] fixHost starting: m02
	I0522 18:16:58.392601  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.408748  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Stopped err=<nil>
	W0522 18:16:58.408768  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:16:58.410677  101831 out.go:177] * Restarting existing docker container for "ha-828033-m02" ...
	I0522 18:16:58.411822  101831 cli_runner.go:164] Run: docker start ha-828033-m02
	I0522 18:16:58.662201  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:16:58.678298  101831 kic.go:430] container "ha-828033-m02" state is running.
	I0522 18:16:58.678749  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:16:58.695431  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:16:58.695483  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:16:58.710353  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	W0522 18:16:58.711129  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.711158  101831 retry.go:31] will retry after 162.419442ms: ssh: handshake failed: read tcp 127.0.0.1:37780->127.0.0.1:32822: read: connection reset by peer
	W0522 18:16:58.874922  101831 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:58.874949  101831 retry.go:31] will retry after 374.487623ms: ssh: handshake failed: read tcp 127.0.0.1:37782->127.0.0.1:32822: read: connection reset by peer
	I0522 18:16:59.335651  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:16:59.339485  101831 fix.go:56] duration metric: took 947.0745ms for fixHost
	I0522 18:16:59.339510  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 947.115875ms
	W0522 18:16:59.339525  101831 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:16:59.339587  101831 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:16:59.339604  101831 start.go:728] Will try again in 5 seconds ...
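
The StartHost failure ("container addresses should have 2 values, got 1 values: []") is a parsing error, not a Docker one. The inspect template run at 18:16:58.678749 prints "IPv4,IPv6" when the container is attached to the "ha-828033-m02" network and prints nothing at all when that network entry is missing; splitting an empty string on "," yields a single empty element, hence "got 1 values". A hedged sketch of that split (parseAddrs is a hypothetical helper, not minikube's exact code):

    package main

    import (
        "fmt"
        "strings"
    )

    // parseAddrs splits the inspect template's output, which is expected to be
    // "IPv4,IPv6" (the IPv6 part may be empty for an IPv4-only container).
    func parseAddrs(inspectOutput string) ([]string, error) {
        addrs := strings.Split(strings.TrimSpace(inspectOutput), ",")
        if len(addrs) != 2 {
            return nil, fmt.Errorf("container addresses should have 2 values, got %d values: %v",
                len(addrs), addrs)
        }
        return addrs, nil
    }

    func main() {
        // Attached container: the template printed "192.168.49.3," (IPv4 only).
        fmt.Println(parseAddrs("192.168.49.3,"))
        // Detached container, as in the log: the template printed an empty
        // string, so the split produces one empty element.
        fmt.Println(parseAddrs(""))
    }
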
	I0522 18:17:04.343396  101831 start.go:360] acquireMachinesLock for ha-828033-m02: {Name:mk5f7d59268c8438badf0506a539b57f0dca67dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:17:04.343479  101831 start.go:364] duration metric: took 52.078µs to acquireMachinesLock for "ha-828033-m02"
	I0522 18:17:04.343499  101831 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:17:04.343506  101831 fix.go:54] fixHost starting: m02
	I0522 18:17:04.343719  101831 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:17:04.359537  101831 fix.go:112] recreateIfNeeded on ha-828033-m02: state=Running err=<nil>
	W0522 18:17:04.359560  101831 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:17:04.361525  101831 out.go:177] * Updating the running docker "ha-828033-m02" container ...
	I0522 18:17:04.362763  101831 machine.go:94] provisionDockerMachine start ...
	I0522 18:17:04.362823  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.378286  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.378448  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.378458  101831 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:17:04.490382  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.490408  101831 ubuntu.go:169] provisioning hostname "ha-828033-m02"
	I0522 18:17:04.490471  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.506007  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.506177  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.506191  101831 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-828033-m02 && echo "ha-828033-m02" | sudo tee /etc/hostname
	I0522 18:17:04.628978  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-828033-m02
	
	I0522 18:17:04.629058  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:17:04.645189  101831 main.go:141] libmachine: Using SSH client type: native
	I0522 18:17:04.645348  101831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32822 <nil> <nil>}
	I0522 18:17:04.645364  101831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-828033-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-828033-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-828033-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:17:04.759139  101831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:17:04.759186  101831 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:17:04.759214  101831 ubuntu.go:177] setting up certificates
	I0522 18:17:04.759235  101831 provision.go:84] configureAuth start
	I0522 18:17:04.759332  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.775834  101831 provision.go:87] duration metric: took 16.584677ms to configureAuth
	W0522 18:17:04.775854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.775873  101831 retry.go:31] will retry after 126.959µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.777009  101831 provision.go:84] configureAuth start
	I0522 18:17:04.777074  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.793126  101831 provision.go:87] duration metric: took 16.098282ms to configureAuth
	W0522 18:17:04.793147  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.793164  101831 retry.go:31] will retry after 87.815µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.794272  101831 provision.go:84] configureAuth start
	I0522 18:17:04.794339  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.810002  101831 provision.go:87] duration metric: took 15.712157ms to configureAuth
	W0522 18:17:04.810023  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.810043  101831 retry.go:31] will retry after 160.401µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.811149  101831 provision.go:84] configureAuth start
	I0522 18:17:04.811208  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.826479  101831 provision.go:87] duration metric: took 15.314201ms to configureAuth
	W0522 18:17:04.826498  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.826513  101831 retry.go:31] will retry after 419.179µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.827621  101831 provision.go:84] configureAuth start
	I0522 18:17:04.827687  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.842837  101831 provision.go:87] duration metric: took 15.198634ms to configureAuth
	W0522 18:17:04.842854  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.842870  101831 retry.go:31] will retry after 333.49µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.843983  101831 provision.go:84] configureAuth start
	I0522 18:17:04.844056  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.858999  101831 provision.go:87] duration metric: took 15.001015ms to configureAuth
	W0522 18:17:04.859014  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.859029  101831 retry.go:31] will retry after 831.427µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.860145  101831 provision.go:84] configureAuth start
	I0522 18:17:04.860207  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.874679  101831 provision.go:87] duration metric: took 14.517169ms to configureAuth
	W0522 18:17:04.874696  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.874710  101831 retry.go:31] will retry after 1.617455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.876883  101831 provision.go:84] configureAuth start
	I0522 18:17:04.876932  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.891845  101831 provision.go:87] duration metric: took 14.947571ms to configureAuth
	W0522 18:17:04.891860  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.891873  101831 retry.go:31] will retry after 1.45074ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.894054  101831 provision.go:84] configureAuth start
	I0522 18:17:04.894110  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.909207  101831 provision.go:87] duration metric: took 15.132147ms to configureAuth
	W0522 18:17:04.909224  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.909239  101831 retry.go:31] will retry after 2.781453ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.912374  101831 provision.go:84] configureAuth start
	I0522 18:17:04.912425  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.927102  101831 provision.go:87] duration metric: took 14.710332ms to configureAuth
	W0522 18:17:04.927120  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.927135  101831 retry.go:31] will retry after 3.086595ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.930243  101831 provision.go:84] configureAuth start
	I0522 18:17:04.930304  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.944990  101831 provision.go:87] duration metric: took 14.727208ms to configureAuth
	W0522 18:17:04.945005  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.945020  101831 retry.go:31] will retry after 8.052612ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.953127  101831 provision.go:84] configureAuth start
	I0522 18:17:04.953199  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.968194  101831 provision.go:87] duration metric: took 15.047376ms to configureAuth
	W0522 18:17:04.968211  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.968235  101831 retry.go:31] will retry after 12.227939ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.981403  101831 provision.go:84] configureAuth start
	I0522 18:17:04.981475  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:04.996918  101831 provision.go:87] duration metric: took 15.4993ms to configureAuth
	W0522 18:17:04.996933  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:04.996947  101831 retry.go:31] will retry after 9.372006ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.007135  101831 provision.go:84] configureAuth start
	I0522 18:17:05.007251  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.021722  101831 provision.go:87] duration metric: took 14.570245ms to configureAuth
	W0522 18:17:05.021738  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.021751  101831 retry.go:31] will retry after 23.298276ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.045949  101831 provision.go:84] configureAuth start
	I0522 18:17:05.046030  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.062577  101831 provision.go:87] duration metric: took 16.607282ms to configureAuth
	W0522 18:17:05.062597  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.062613  101831 retry.go:31] will retry after 40.757138ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.103799  101831 provision.go:84] configureAuth start
	I0522 18:17:05.103887  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.119482  101831 provision.go:87] duration metric: took 15.655062ms to configureAuth
	W0522 18:17:05.119499  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.119516  101831 retry.go:31] will retry after 38.095973ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.158702  101831 provision.go:84] configureAuth start
	I0522 18:17:05.158788  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.174198  101831 provision.go:87] duration metric: took 15.463621ms to configureAuth
	W0522 18:17:05.174214  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.174232  101831 retry.go:31] will retry after 48.82201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.223426  101831 provision.go:84] configureAuth start
	I0522 18:17:05.223513  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.239564  101831 provision.go:87] duration metric: took 16.11307ms to configureAuth
	W0522 18:17:05.239581  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.239597  101831 retry.go:31] will retry after 136.469602ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.376897  101831 provision.go:84] configureAuth start
	I0522 18:17:05.377009  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.393537  101831 provision.go:87] duration metric: took 16.613386ms to configureAuth
	W0522 18:17:05.393558  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.393575  101831 retry.go:31] will retry after 161.82385ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.555925  101831 provision.go:84] configureAuth start
	I0522 18:17:05.556033  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.572787  101831 provision.go:87] duration metric: took 16.830217ms to configureAuth
	W0522 18:17:05.572804  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.572824  101831 retry.go:31] will retry after 213.087725ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.786136  101831 provision.go:84] configureAuth start
	I0522 18:17:05.786249  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:05.801903  101831 provision.go:87] duration metric: took 15.735371ms to configureAuth
	W0522 18:17:05.801919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:05.801935  101831 retry.go:31] will retry after 367.249953ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.169404  101831 provision.go:84] configureAuth start
	I0522 18:17:06.169504  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.186269  101831 provision.go:87] duration metric: took 16.837758ms to configureAuth
	W0522 18:17:06.186288  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.186306  101831 retry.go:31] will retry after 668.860958ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.856116  101831 provision.go:84] configureAuth start
	I0522 18:17:06.856211  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:06.872110  101831 provision.go:87] duration metric: took 15.968481ms to configureAuth
	W0522 18:17:06.872130  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:06.872145  101831 retry.go:31] will retry after 1.080057807s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.952333  101831 provision.go:84] configureAuth start
	I0522 18:17:07.952446  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:07.969099  101831 provision.go:87] duration metric: took 16.737681ms to configureAuth
	W0522 18:17:07.969119  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:07.969136  101831 retry.go:31] will retry after 1.35549681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.325582  101831 provision.go:84] configureAuth start
	I0522 18:17:09.325692  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:09.341763  101831 provision.go:87] duration metric: took 16.155925ms to configureAuth
	W0522 18:17:09.341780  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:09.341798  101831 retry.go:31] will retry after 1.897886244s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.240016  101831 provision.go:84] configureAuth start
	I0522 18:17:11.240140  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:11.257072  101831 provision.go:87] duration metric: took 17.02632ms to configureAuth
	W0522 18:17:11.257092  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:11.257114  101831 retry.go:31] will retry after 2.810888271s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.070011  101831 provision.go:84] configureAuth start
	I0522 18:17:14.070113  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:14.085901  101831 provision.go:87] duration metric: took 15.848159ms to configureAuth
	W0522 18:17:14.085919  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:14.085935  101831 retry.go:31] will retry after 4.662344732s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.748720  101831 provision.go:84] configureAuth start
	I0522 18:17:18.748845  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:18.765467  101831 provision.go:87] duration metric: took 16.701835ms to configureAuth
	W0522 18:17:18.765486  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:18.765504  101831 retry.go:31] will retry after 3.216983163s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:21.983872  101831 provision.go:84] configureAuth start
	I0522 18:17:21.983984  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:22.000235  101831 provision.go:87] duration metric: took 16.33158ms to configureAuth
	W0522 18:17:22.000253  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:22.000269  101831 retry.go:31] will retry after 5.251668241s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.253805  101831 provision.go:84] configureAuth start
	I0522 18:17:27.253896  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:27.270555  101831 provision.go:87] duration metric: took 16.716068ms to configureAuth
	W0522 18:17:27.270575  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:27.270593  101831 retry.go:31] will retry after 7.113433713s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.388102  101831 provision.go:84] configureAuth start
	I0522 18:17:34.388187  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:34.404845  101831 provision.go:87] duration metric: took 16.712516ms to configureAuth
	W0522 18:17:34.404862  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:34.404878  101831 retry.go:31] will retry after 14.943192814s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.349248  101831 provision.go:84] configureAuth start
	I0522 18:17:49.349327  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:17:49.365985  101831 provision.go:87] duration metric: took 16.710371ms to configureAuth
	W0522 18:17:49.366002  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:17:49.366018  101831 retry.go:31] will retry after 20.509395565s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.875559  101831 provision.go:84] configureAuth start
	I0522 18:18:09.875637  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:09.892771  101831 provision.go:87] duration metric: took 17.18443ms to configureAuth
	W0522 18:18:09.892792  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:09.892808  101831 retry.go:31] will retry after 43.941504091s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.837442  101831 provision.go:84] configureAuth start
	I0522 18:18:53.837525  101831 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	I0522 18:18:53.854156  101831 provision.go:87] duration metric: took 16.677406ms to configureAuth
	W0522 18:18:53.854181  101831 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854199  101831 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.854204  101831 machine.go:97] duration metric: took 1m49.491432011s to provisionDockerMachine
	I0522 18:18:53.854270  101831 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:18:53.854308  101831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033-m02
	I0522 18:18:53.869467  101831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32822 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033-m02/id_rsa Username:docker}
	I0522 18:18:53.955836  101831 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:18:53.959906  101831 fix.go:56] duration metric: took 1m49.616394756s for fixHost
	I0522 18:18:53.959927  101831 start.go:83] releasing machines lock for "ha-828033-m02", held for 1m49.61643748s
	W0522 18:18:53.960003  101831 out.go:239] * Failed to start docker container. Running "minikube delete -p ha-828033" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:18:53.962122  101831 out.go:177] 
	W0522 18:18:53.963599  101831 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:18:53.963614  101831 out.go:239] * 
	W0522 18:18:53.964392  101831 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:18:53.965343  101831 out.go:177] 
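
The configureAuth loop above fails the same way on every attempt because of how the inspect template behaves: `index .NetworkSettings.Networks "ha-828033-m02"` returns nil when the container is no longer attached to that network, the `{{with}}` block is skipped, and the template prints an empty string. Splitting an empty string on "," yields a single empty element, which is exactly the "got 1 values: []" in the log. Below is a minimal sketch of that validation, not minikube's actual provision code; splitAddrs is a hypothetical name introduced only to mirror the error message.

package main

import (
	"fmt"
	"strings"
)

// splitAddrs (hypothetical) mirrors the check behind the
// "container addresses should have 2 values" error: the inspect
// template prints "<ipv4>,<ipv6>" while the container is attached
// to the named network, and an empty string once it is not.
func splitAddrs(templateOutput string) ([]string, error) {
	addrs := strings.Split(templateOutput, ",")
	if len(addrs) != 2 {
		return nil, fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
	}
	return addrs, nil
}

func main() {
	// Attached, IPv6 disabled: two values (second one empty) -> accepted.
	fmt.Println(splitAddrs("192.168.49.3,"))
	// Detached from the network: empty output -> "got 1 values: []".
	fmt.Println(splitAddrs(""))
}

The retry schedule grows from roughly 23ms to 44s between attempts, so provisioning burns its whole budget (the logged 1m49.491432011s) without the failure mode ever changing: the ha-828033-m02 container never regains its network attachment.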
	
	
	==> Docker <==
	May 22 18:18:54 ha-828033 dockerd[952]: 2024/05/22 18:18:54 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:55 ha-828033 dockerd[952]: 2024/05/22 18:18:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:55 ha-828033 dockerd[952]: 2024/05/22 18:18:55 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:56 ha-828033 dockerd[952]: 2024/05/22 18:18:56 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:57 ha-828033 dockerd[952]: 2024/05/22 18:18:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:57 ha-828033 dockerd[952]: 2024/05/22 18:18:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:57 ha-828033 dockerd[952]: 2024/05/22 18:18:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:57 ha-828033 dockerd[952]: 2024/05/22 18:18:57 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:58 ha-828033 dockerd[952]: 2024/05/22 18:18:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:58 ha-828033 dockerd[952]: 2024/05/22 18:18:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:58 ha-828033 dockerd[952]: 2024/05/22 18:18:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:58 ha-828033 dockerd[952]: 2024/05/22 18:18:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:58 ha-828033 dockerd[952]: 2024/05/22 18:18:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:58 ha-828033 dockerd[952]: 2024/05/22 18:18:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:18:58 ha-828033 dockerd[952]: 2024/05/22 18:18:58 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ab6eec742dda2       91be940803172                                                                                         39 seconds ago       Exited              kube-apiserver            10                  169a6c9879eda       kube-apiserver-ha-828033
	9df3be5b44482       25a1387cdab82                                                                                         About a minute ago   Exited              kube-controller-manager   8                   6e7d4995ae7f4       kube-controller-manager-ha-828033
	0d8fa2694d165       38af8ddebf499                                                                                         3 minutes ago        Running             kube-vip                  1                   bf110b30ae61d       kube-vip-ha-828033
	a3b9aabcf43d5       a52dc94f0a912                                                                                         3 minutes ago        Running             kube-scheduler            2                   7621536a2355b       kube-scheduler-ha-828033
	237edba91c861       3861cfcd7c04c                                                                                         3 minutes ago        Running             etcd                      2                   a7d8f14facc12       etcd-ha-828033
	533c1df8e6e48       a52dc94f0a912                                                                                         7 minutes ago        Exited              kube-scheduler            1                   3f8fe727d5f2c       kube-scheduler-ha-828033
	d884e203b30c3       38af8ddebf499                                                                                         7 minutes ago        Exited              kube-vip                  0                   a52b9affd7ecf       kube-vip-ha-828033
	5e54bd5002a08       3861cfcd7c04c                                                                                         7 minutes ago        Exited              etcd                      1                   62b9b95d560d3       etcd-ha-828033
	8a3f2325c99bb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago       Exited              busybox                   0                   bc32c92f2fa04       busybox-fc5497c4f-nhhq2
	3d03dbb9a9ab6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   cddac885b8c2a       coredns-7db6d8ff4d-dxfhb
	f7fd69b1c56b6       cbb01a7bd410d                                                                                         25 minutes ago       Exited              coredns                   1                   921c71ab51b29       coredns-7db6d8ff4d-gznzs
	f6aa98f9307fc       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              25 minutes ago       Exited              kindnet-cni               0                   7caff96cd793b       kindnet-swzdx
	4aff7c101c8df       6e38f40d628db                                                                                         25 minutes ago       Exited              storage-provisioner       0                   715e7f0294d0a       storage-provisioner
	faac4370a3326       747097150317f                                                                                         25 minutes ago       Exited              kube-proxy                0                   7920c4e023081       kube-proxy-fl69s
	a9f9b4a4a64a7       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     25 minutes ago       Exited              kube-vip                  0                   1648bcaea393a       kube-vip-ha-828033
	
	
	==> coredns [3d03dbb9a9ab] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50131 - 3587 "HINFO IN 4470179154878961707.8016045803374425342. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008706143s
	[INFO] 10.244.0.4:49327 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000241885s
	[INFO] 10.244.0.4:40802 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.006121515s
	[INFO] 10.244.0.4:42584 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.007769308s
	[INFO] 10.244.0.4:47105 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.013499362s
	[INFO] 10.244.0.4:44479 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128455s
	[INFO] 10.244.0.4:39317 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000207648s
	[INFO] 10.244.0.4:57178 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126815s
	[INFO] 10.244.0.4:34145 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128007s
	[INFO] 10.244.0.4:43818 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111992s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f7fd69b1c56b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:56012 - 52073 "HINFO IN 3567271529342956343.6809638899562307915. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013755393s
	[INFO] 10.244.0.4:42570 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000229587s
	[INFO] 10.244.0.4:33806 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005994434s
	[INFO] 10.244.0.4:35583 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143346s
	[INFO] 10.244.0.4:35186 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.009006854s
	[INFO] 10.244.0.4:52002 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014061s
	[INFO] 10.244.0.4:46177 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119698s
	[INFO] 10.244.0.4:36226 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00010004s
	[INFO] 10.244.0.4:43527 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000122855s
	[INFO] 10.244.0.4:40163 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000082725s
	[INFO] 10.244.0.4:45802 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189459s
	[INFO] 10.244.0.4:53228 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000101006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E0522 18:18:59.542783    4272 cert_rotation.go:168] key failed with : x509: cannot parse IP address of length 0
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
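
The "x509: cannot parse IP address of length 0" in stderr is the same error that is crash-looping kube-apiserver below, and it points at a certificate carrying a zero-length IP SAN, which is plausible given that provisioning could not determine the node's IP. The sketch below is a self-contained reproduction of that exact parse error under that assumption, not an excerpt from minikube or Kubernetes: crypto/x509 happily encodes an empty net.IP into a certificate, but refuses to parse it back.

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "demo"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(time.Hour),
		// A zero-length IP SAN, as you would get by writing an
		// empty/unresolved node address into the certificate.
		IPAddresses: []net.IP{{}},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_, err = x509.ParseCertificate(der)
	fmt.Println(err) // x509: cannot parse IP address of length 0
}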
	
	
	==> dmesg <==
	[  +0.007955] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008770] FS-Cache: N-key=[8] '0490130200000000'
	[  +0.008419] FS-Cache: Duplicate cookie detected
	[  +0.005258] FS-Cache: O-cookie c=00000005 [p=00000003 fl=226 nc=0 na=1]
	[  +0.008111] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=00000000f8bfbf2a
	[  +0.008735] FS-Cache: O-key=[8] '0490130200000000'
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	
	
	==> etcd [237edba91c86] <==
	{"level":"info","ts":"2024-05-22T18:15:24.171999Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:15:24.172115Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-05-22T18:15:24.172313Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-05-22T18:15:24.172381Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:15:24.172464Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.172492Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:15:24.175121Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:15:24.17563Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:15:24.175673Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:15:24.175795Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:24.175803Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:25.561806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:15:25.561918Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561927Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.561953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-05-22T18:15:25.562871Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:15:25.562879Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.562911Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:15:25.563078Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.563101Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:15:25.564753Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:15:25.564849Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [5e54bd5002a0] <==
	{"level":"info","ts":"2024-05-22T18:11:54.964206Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:11:56.250663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250725Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-05-22T18:11:56.250791Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250801Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.250829Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-05-22T18:11:56.251833Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:ha-828033 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:11:56.251888Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251966Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:11:56.251993Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.252046Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:11:56.253903Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-05-22T18:11:56.253943Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:15:08.835661Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:15:08.835741Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:15:08.835877Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83592Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.837589Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:15:08.83762Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:15:08.837698Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-05-22T18:15:08.8412Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841311Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-05-22T18:15:08.841321Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-828033","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 18:18:59 up  1:01,  0 users,  load average: 0.10, 0.21, 0.35
	Linux ha-828033 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [f6aa98f9307f] <==
	I0522 18:09:22.215180       1 main.go:227] handling current node
	I0522 18:09:32.218376       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:32.218398       1 main.go:227] handling current node
	I0522 18:09:42.225708       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:42.225729       1 main.go:227] handling current node
	I0522 18:09:52.229501       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:09:52.229521       1 main.go:227] handling current node
	I0522 18:10:02.241125       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:02.241147       1 main.go:227] handling current node
	I0522 18:10:12.244660       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:12.244685       1 main.go:227] handling current node
	I0522 18:10:22.248202       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:22.248227       1 main.go:227] handling current node
	I0522 18:10:32.251732       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:32.251756       1 main.go:227] handling current node
	I0522 18:10:42.255311       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:42.255337       1 main.go:227] handling current node
	I0522 18:10:52.259509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:10:52.259531       1 main.go:227] handling current node
	I0522 18:11:02.270967       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:02.270991       1 main.go:227] handling current node
	I0522 18:11:12.274509       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:12.274532       1 main.go:227] handling current node
	I0522 18:11:22.286083       1 main.go:223] Handling node with IPs: map[192.168.49.2:{}]
	I0522 18:11:22.286107       1 main.go:227] handling current node
	
	
	==> kube-apiserver [ab6eec742dda] <==
	I0522 18:18:20.314736       1 options.go:221] external host was not specified, using 192.168.49.2
	I0522 18:18:20.315573       1 server.go:148] Version: v1.30.1
	I0522 18:18:20.315620       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E0522 18:18:20.316026       1 run.go:74] "command failed" err="x509: cannot parse IP address of length 0"
	
	
	==> kube-controller-manager [9df3be5b4448] <==
	I0522 18:17:30.673877       1 serving.go:380] Generated self-signed cert in-memory
	I0522 18:17:31.149959       1 controllermanager.go:189] "Starting" version="v1.30.1"
	I0522 18:17:31.149983       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:17:31.151310       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0522 18:17:31.151319       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0522 18:17:31.151615       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0522 18:17:31.151721       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	E0522 18:17:41.152974       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8443/healthz\": dial tcp 192.168.49.2:8443: connect: connection refused"
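
The controller-manager never gets past its startup gate: it polls the apiserver's /healthz for a bounded period and aborts with the "failed to wait for apiserver being healthy" error above once the budget runs out. The following is a minimal stand-in for that gate, not the kube-controller-manager source; waitHealthy is a hypothetical helper, and it skips TLS verification, which the real client (using the cluster CA) does not.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy (hypothetical) polls url until it returns 200 OK or the
// budget expires, mimicking the controller-manager's apiserver wait.
func waitHealthy(url string, budget time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Sketch only: the real client verifies against the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(budget)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("failed to wait for apiserver being healthy: timed out waiting for the condition")
}

func main() {
	// With the apiserver down, this fails the same way as the log above.
	fmt.Println(waitHealthy("https://192.168.49.2:8443/healthz", 10*time.Second))
}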
	
	
	==> kube-proxy [faac4370a332] <==
	I0522 17:53:25.952564       1 server_linux.go:69] "Using iptables proxy"
	I0522 17:53:25.969504       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	I0522 17:53:25.993839       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 17:53:25.993892       1 server_linux.go:165] "Using iptables Proxier"
	I0522 17:53:25.996582       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 17:53:25.996608       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 17:53:25.996633       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 17:53:25.996844       1 server.go:872] "Version info" version="v1.30.1"
	I0522 17:53:25.996866       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 17:53:26.043989       1 config.go:192] "Starting service config controller"
	I0522 17:53:26.044016       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 17:53:26.044053       1 config.go:101] "Starting endpoint slice config controller"
	I0522 17:53:26.044059       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 17:53:26.044235       1 config.go:319] "Starting node config controller"
	I0522 17:53:26.044257       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 17:53:26.144579       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 17:53:26.144617       1 shared_informer.go:320] Caches are synced for service config
	I0522 17:53:26.144751       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [533c1df8e6e4] <==
	E0522 18:14:38.214180       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:38.666046       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:38.666106       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:46.971856       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:46.971918       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.49.2:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:49.986221       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:49.986269       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:51.164192       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:51.164258       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:53.155290       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:53.155333       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:14:57.308357       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:14:57.308427       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:00.775132       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:00.775178       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.142808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.142853       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.49.2:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:02.389919       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:02.389963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:03.819888       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:03.819951       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:15:08.822760       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.822810       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:15:08.835649       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	E0522 18:15:08.835866       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a3b9aabcf43d] <==
	E0522 18:18:14.623670       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:16.437879       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:16.437942       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.49.2:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:21.354024       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:21.354088       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.49.2:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:22.872425       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:22.872466       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:24.992452       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:24.992522       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.49.2:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:25.605464       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:25.605527       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.49.2:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:32.504648       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:32.504690       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:33.956300       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:33.956359       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.49.2:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:38.236258       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:38.236301       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.49.2:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:51.929565       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:51.929609       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.49.2:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:54.447379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:54.447426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.49.2:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:58.573833       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:58.573890       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.49.2:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	W0522 18:18:59.414614       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
	E0522 18:18:59.414656       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.49.2:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.49.2:8443: connect: connection refused
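
The W/E pairs above are ordinary client-go reflector retries against an apiserver that is not accepting connections; each of the scheduler's informers retries its initial List independently, which is why the same dial error repeats per resource type. A minimal, self-contained sketch (a hypothetical standalone program, not part of the test) that reproduces the identical error string:

    // Sketch: a bare client-go List against an unreachable apiserver fails with
    // the same "dial tcp ... connect: connection refused" seen in the log above.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host:            "https://192.168.49.2:8443", // endpoint from the log; nothing is listening
            TLSClientConfig: rest.TLSClientConfig{Insecure: true},
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _, err = cs.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{Limit: 500})
        fmt.Println(err) // Get "https://192.168.49.2:8443/api/v1/pods?...": ... connect: connection refused
    }
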
	
	
	==> kubelet <==
	May 22 18:18:31 ha-828033 kubelet[1391]: I0522 18:18:31.546287    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759596    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:33 ha-828033 kubelet[1391]: E0522 18:18:33.759605    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:36 ha-828033 kubelet[1391]: W0522 18:18:36.831650    1391 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:36 ha-828033 kubelet[1391]: E0522 18:18:36.831737    1391 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.49.254:8443: connect: no route to host
	May 22 18:18:37 ha-828033 kubelet[1391]: E0522 18:18:37.277567    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:39 ha-828033 kubelet[1391]: E0522 18:18:39.903642    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:40 ha-828033 kubelet[1391]: I0522 18:18:40.761060    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:41 ha-828033 kubelet[1391]: I0522 18:18:41.183405    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:41 ha-828033 kubelet[1391]: E0522 18:18:41.183871    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:18:42 ha-828033 kubelet[1391]: I0522 18:18:42.183526    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.183890    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975535    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:42 ha-828033 kubelet[1391]: E0522 18:18:42.975547    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:47 ha-828033 kubelet[1391]: E0522 18:18:47.278535    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:49 ha-828033 kubelet[1391]: I0522 18:18:49.976988    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191597    1391 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.49.254:8443: connect: no route to host" node="ha-828033"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191602    1391 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-828033?timeout=10s\": dial tcp 192.168.49.254:8443: connect: no route to host" interval="7s"
	May 22 18:18:52 ha-828033 kubelet[1391]: E0522 18:18:52.191623    1391 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 192.168.49.254:8443: connect: no route to host" event="&Event{ObjectMeta:{ha-828033.17d1e244982d5e69  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ha-828033,UID:ha-828033,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ha-828033 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ha-828033,},FirstTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,LastTimestamp:2024-05-22 18:15:17.243633257 +0000 UTC m=+0.150356311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ha-828033,}"
	May 22 18:18:53 ha-828033 kubelet[1391]: I0522 18:18:53.183075    1391 scope.go:117] "RemoveContainer" containerID="ab6eec742dda2398ada52b3175a331d7a225e60bacf7e55ba60f5ca252d3597c"
	May 22 18:18:53 ha-828033 kubelet[1391]: E0522 18:18:53.183526    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-apiserver pod=kube-apiserver-ha-828033_kube-system(54b3b26e16a7ecb9b17fbc5a589bfe7d)\"" pod="kube-system/kube-apiserver-ha-828033" podUID="54b3b26e16a7ecb9b17fbc5a589bfe7d"
	May 22 18:18:55 ha-828033 kubelet[1391]: I0522 18:18:55.182931    1391 scope.go:117] "RemoveContainer" containerID="9df3be5b444827fe03c3f2ad6b786bf7f6bd1307d5bc7980377128329689e265"
	May 22 18:18:55 ha-828033 kubelet[1391]: E0522 18:18:55.183355    1391 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-ha-828033_kube-system(c47e7893b590cf4463db3d58cbcdb223)\"" pod="kube-system/kube-controller-manager-ha-828033" podUID="c47e7893b590cf4463db3d58cbcdb223"
	May 22 18:18:57 ha-828033 kubelet[1391]: E0522 18:18:57.278653    1391 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ha-828033\" not found"
	May 22 18:18:59 ha-828033 kubelet[1391]: I0522 18:18:59.193403    1391 kubelet_node_status.go:73] "Attempting to register node" node="ha-828033"
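
The back-off values in these kubelet lines (1m20s for kube-controller-manager, 2m40s for kube-apiserver) fall on kubelet's default crash-loop schedule, which starts at 10s and doubles per restart up to a 5m cap. The sketch below, assuming those upstream defaults are unchanged in this build, prints the sequence:

    // Sketch of kubelet's default container restart backoff: 10s base,
    // doubling per restart, capped at 5 minutes (upstream defaults, assumed).
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        d, limit := 10*time.Second, 5*time.Minute
        for i := 0; i < 7; i++ {
            fmt.Println(d) // 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
            d *= 2
            if d > limit {
                d = limit
            }
        }
    }
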
	
	
	==> storage-provisioner [4aff7c101c8d] <==
	I0522 17:53:26.489878       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 17:53:26.504063       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 17:53:26.504102       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 17:53:26.512252       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 17:53:26.512472       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
	I0522 17:53:26.512684       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"895ffb52-c06f-40dc-b6c2-d8db833fa097", APIVersion:"v1", ResourceVersion:"400", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc became leader
	I0522 17:53:26.612677       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-828033_8153ed51-b201-463a-8328-a1e0474c3dbc!
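
The leaderelection.go lines come from client-go's leader-election helper: the provisioner acquires the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock, per the event above) and only starts its controller once it holds the lease. A minimal sketch of the same flow, deliberately substituting the newer LeaseLock for the Endpoints lock and with illustrative timing values:

    // Minimal client-go leader-election sketch; lock name and namespace are
    // taken from the log, LeaseLock replaces the provisioner's Endpoints lock.
    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := rest.InClusterConfig() // the provisioner runs in-cluster
        if err != nil {
            log.Fatal(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
            Client:     cs.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "ha-828033-example"},
        }
        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: 10 * time.Second,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { log.Println("became leader; starting controller") },
                OnStoppedLeading: func() { log.Println("lost lease") },
            },
        })
    }
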
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-828033 -n ha-828033: exit status 2 (243.150241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "ha-828033" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.62s)
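
The --format value passed to minikube status above is a Go text/template rendered over the status struct, which is how the bare string "Stopped" comes back on stdout. A self-contained sketch, with an illustrative struct standing in for minikube's:

    // The --format flag is a text/template; Status here is a stand-in type
    // whose field names follow the template keys the CLI accepts.
    package main

    import (
        "os"
        "text/template"
    )

    type Status struct {
        Host, Kubelet, APIServer string
    }

    func main() {
        tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
        if err := tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Running", APIServer: "Stopped"}); err != nil {
            panic(err)
        }
        // prints: Stopped
    }
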

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (248.19s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-737786 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0522 18:32:24.838964   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
multinode_test.go:96: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-737786 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: exit status 80 (4m6.302468814s)

                                                
                                                
-- stdout --
	* [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Stopping node "multinode-737786-m02"  ...
	* Powering off "multinode-737786-m02" via SSH ...
	* Deleting "multinode-737786-m02" in docker ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:32:18.820070  160939 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:32:18.820158  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820166  160939 out.go:304] Setting ErrFile to fd 2...
	I0522 18:32:18.820169  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820356  160939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:32:18.820906  160939 out.go:298] Setting JSON to false
	I0522 18:32:18.821847  160939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4483,"bootTime":1716398256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:32:18.821903  160939 start.go:139] virtualization: kvm guest
	I0522 18:32:18.825068  160939 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:32:18.826450  160939 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:32:18.826451  160939 notify.go:220] Checking for updates...
	I0522 18:32:18.827917  160939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:32:18.829159  160939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:18.830471  160939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:32:18.832039  160939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:32:18.833509  160939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:32:18.835235  160939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:32:18.856978  160939 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:32:18.857075  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.904065  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.895172586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.904163  160939 docker.go:295] overlay module found
	I0522 18:32:18.906205  160939 out.go:177] * Using the docker driver based on user configuration
	I0522 18:32:18.907716  160939 start.go:297] selected driver: docker
	I0522 18:32:18.907745  160939 start.go:901] validating driver "docker" against <nil>
	I0522 18:32:18.907759  160939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:32:18.908486  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.953709  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.945190998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.953883  160939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 18:32:18.954091  160939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:32:18.956247  160939 out.go:177] * Using Docker driver with root privileges
	I0522 18:32:18.957858  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:18.957878  160939 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 18:32:18.957886  160939 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 18:32:18.957966  160939 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:18.959670  160939 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:32:18.961220  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:32:18.962715  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:32:18.964248  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:18.964293  160939 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:32:18.964303  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:32:18.964344  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:32:18.964398  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:32:18.964409  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:32:18.964718  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:18.964741  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json: {Name:mk43b46af9c3b0b30bdffa978db6463aacef7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:18.980726  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:32:18.980763  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:32:18.980786  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:32:18.980821  160939 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:32:18.980939  160939 start.go:364] duration metric: took 90.565µs to acquireMachinesLock for "multinode-737786"
	I0522 18:32:18.980970  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:18.981093  160939 start.go:125] createHost starting for "" (driver="docker")
	I0522 18:32:18.983462  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:32:18.983714  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:32:18.983748  160939 client.go:168] LocalClient.Create starting
	I0522 18:32:18.983834  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:32:18.983868  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983888  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.983948  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:32:18.983967  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983980  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.984396  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 18:32:18.999077  160939 cli_runner.go:211] docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 18:32:18.999133  160939 network_create.go:281] running [docker network inspect multinode-737786] to gather additional debugging logs...
	I0522 18:32:18.999152  160939 cli_runner.go:164] Run: docker network inspect multinode-737786
	W0522 18:32:19.013736  160939 cli_runner.go:211] docker network inspect multinode-737786 returned with exit code 1
	I0522 18:32:19.013763  160939 network_create.go:284] error running [docker network inspect multinode-737786]: docker network inspect multinode-737786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-737786 not found
	I0522 18:32:19.013789  160939 network_create.go:286] output of [docker network inspect multinode-737786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-737786 not found
	
	** /stderr **
	I0522 18:32:19.013898  160939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:19.029452  160939 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-638c6f0967c1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:dc:4f:16} reservation:<nil>}
	I0522 18:32:19.029912  160939 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcc438b661e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:35:35:2f} reservation:<nil>}
	I0522 18:32:19.030359  160939 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a34820}
	I0522 18:32:19.030382  160939 network_create.go:124] attempt to create docker network multinode-737786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0522 18:32:19.030423  160939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-737786 multinode-737786
	I0522 18:32:19.080955  160939 network_create.go:108] docker network multinode-737786 192.168.67.0/24 created
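
The three network.go lines above trace the free-subnet scan: candidates start at 192.168.49.0/24 and the third octet steps by 9 (49, 58, 67) until a subnet is not already held by a docker bridge. A sketch of that scan as implied by the log, with isTaken standing in for the real check against existing bridge interfaces:

    // Free-subnet scan inferred from the log (start 192.168.49.0/24, step 9).
    package main

    import "fmt"

    func freeSubnet(isTaken func(string) bool) string {
        for octet := 49; octet < 256; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !isTaken(subnet) {
                return subnet
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
        fmt.Println(freeSubnet(func(s string) bool { return taken[s] })) // 192.168.67.0/24
    }
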
	I0522 18:32:19.080984  160939 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-737786" container
	I0522 18:32:19.081036  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:32:19.095483  160939 cli_runner.go:164] Run: docker volume create multinode-737786 --label name.minikube.sigs.k8s.io=multinode-737786 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:32:19.111371  160939 oci.go:103] Successfully created a docker volume multinode-737786
	I0522 18:32:19.111438  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --entrypoint /usr/bin/test -v multinode-737786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:32:19.598377  160939 oci.go:107] Successfully prepared a docker volume multinode-737786
	I0522 18:32:19.598412  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:19.598430  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:32:19.598501  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:32:23.741449  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.142877958s)
	I0522 18:32:23.741484  160939 kic.go:203] duration metric: took 4.14304939s to extract preloaded images to volume ...
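
The preload step above mounts the lz4 tarball read-only alongside the named volume and untars it with the base image's tar, so the volume holds all Kubernetes images before the node container starts. The same command re-run as a plain os/exec call (tarball path shortened to a placeholder, image digest dropped):

    // Re-running the logged preload-extraction command via os/exec.
    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", "/path/to/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro",
            "-v", "multinode-737786:/extractDir",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887",
            "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("%v: %s", err, out)
        }
    }
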
	W0522 18:32:23.741633  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:32:23.741756  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:32:23.786059  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786 --name multinode-737786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786 --network multinode-737786 --ip 192.168.67.2 --volume multinode-737786:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:32:24.069142  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Running}}
	I0522 18:32:24.086344  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.103978  160939 cli_runner.go:164] Run: docker exec multinode-737786 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:32:24.141807  160939 oci.go:144] the created container "multinode-737786" has a running status.
	I0522 18:32:24.141842  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa...
	I0522 18:32:24.342469  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:32:24.342509  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:32:24.363722  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.383810  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:32:24.383841  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:32:24.455784  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.474782  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:32:24.474871  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.497547  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.497754  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.497767  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:32:24.698482  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
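
"SSH client type: native" means the provisioner dials the container's published SSH port with Go's x/crypto/ssh package rather than shelling out to an ssh binary. A minimal sketch of the same hostname round-trip, using the port, user, and key path from the log (host-key verification skipped here for brevity):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        // Port 32897 is the container's published 22/tcp, as resolved in the log.
        client, err := ssh.Dial("tcp", "127.0.0.1:32897", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // ephemeral test machine
        })
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out) // multinode-737786
    }
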
	
	I0522 18:32:24.698509  160939 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:32:24.698565  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.715252  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.715478  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.715502  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:32:24.840636  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.840711  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.857900  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.858096  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.858117  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:32:24.967023  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:32:24.967068  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:32:24.967091  160939 ubuntu.go:177] setting up certificates
	I0522 18:32:24.967102  160939 provision.go:84] configureAuth start
	I0522 18:32:24.967154  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:24.983423  160939 provision.go:143] copyHostCerts
	I0522 18:32:24.983455  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983479  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:32:24.983485  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983549  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:32:24.983615  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983633  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:32:24.983640  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983665  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:32:24.983708  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983723  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:32:24.983730  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983749  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:32:24.983796  160939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
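
provision.go generates a server certificate signed by the minikube CA carrying exactly the SAN list logged above. A compressed sketch of the x509 calls involved; it self-signs to stay self-contained, whereas minikube signs with ca-key.pem:

    // SAN-bearing server cert, per the san=[...] list in the log; self-signed
    // here for brevity (minikube uses its CA as the parent certificate).
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-737786"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-737786"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
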
	I0522 18:32:25.113895  160939 provision.go:177] copyRemoteCerts
	I0522 18:32:25.113964  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:32:25.113999  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.130480  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:25.215072  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:32:25.215123  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:32:25.235444  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:32:25.235498  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:32:25.255313  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:32:25.255360  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:32:25.275241  160939 provision.go:87] duration metric: took 308.123688ms to configureAuth
	I0522 18:32:25.275280  160939 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:32:25.275447  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:25.275493  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.291597  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.291797  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.291813  160939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:32:25.403199  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:32:25.403222  160939 ubuntu.go:71] root file system type: overlay
	I0522 18:32:25.403368  160939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:32:25.403417  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.419508  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.419684  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.419742  160939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:32:25.540991  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:32:25.541068  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.556804  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.556997  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.557016  160939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:32:26.182116  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 18:32:25.538581939 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
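The comments in the unit above describe systemd's ExecStart-clearing idiom, which generalizes to any drop-in override. A minimal sketch of the same technique (the override path and the dockerd command line here are illustrative, not what minikube writes):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<-'EOF'
	[Service]
	# The empty ExecStart= clears the command inherited from the base unit;
	# without it systemd refuses to start the service ("Service has more than
	# one ExecStart= setting, which is only allowed for Type=oneshot services").
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker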
	
	I0522 18:32:26.182148  160939 machine.go:97] duration metric: took 1.707347407s to provisionDockerMachine
	I0522 18:32:26.182160  160939 client.go:171] duration metric: took 7.198404279s to LocalClient.Create
	I0522 18:32:26.182176  160939 start.go:167] duration metric: took 7.198463255s to libmachine.API.Create "multinode-737786"
	I0522 18:32:26.182182  160939 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:32:26.182195  160939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:32:26.182267  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:32:26.182301  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.198446  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.283412  160939 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:32:26.286206  160939 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:32:26.286222  160939 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:32:26.286230  160939 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:32:26.286238  160939 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:32:26.286245  160939 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:32:26.286252  160939 command_runner.go:130] > ID=ubuntu
	I0522 18:32:26.286258  160939 command_runner.go:130] > ID_LIKE=debian
	I0522 18:32:26.286280  160939 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:32:26.286291  160939 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:32:26.286302  160939 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:32:26.286317  160939 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:32:26.286328  160939 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:32:26.286376  160939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:32:26.286410  160939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:32:26.286428  160939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:32:26.286440  160939 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:32:26.286455  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:32:26.286505  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:32:26.286590  160939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:32:26.286602  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:32:26.286703  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:32:26.294122  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:26.314177  160939 start.go:296] duration metric: took 131.985031ms for postStartSetup
	I0522 18:32:26.314484  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.329734  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:26.329958  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:32:26.329996  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.344674  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.423242  160939 command_runner.go:130] > 27%
	I0522 18:32:26.423479  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:32:26.427170  160939 command_runner.go:130] > 215G
	I0522 18:32:26.427358  160939 start.go:128] duration metric: took 7.446253482s to createHost
	I0522 18:32:26.427380  160939 start.go:83] releasing machines lock for "multinode-737786", held for 7.446425308s
	I0522 18:32:26.427450  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.442825  160939 ssh_runner.go:195] Run: cat /version.json
	I0522 18:32:26.442867  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.442937  160939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:32:26.443009  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.459148  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.459626  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.615027  160939 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:32:26.615123  160939 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:32:26.615168  160939 ssh_runner.go:195] Run: systemctl --version
	I0522 18:32:26.618922  160939 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:32:26.618954  160939 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:32:26.619096  160939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:32:26.622539  160939 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:32:26.622555  160939 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:32:26.622561  160939 command_runner.go:130] > Device: 37h/55d	Inode: 803930      Links: 1
	I0522 18:32:26.622567  160939 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:26.622576  160939 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622584  160939 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622592  160939 command_runner.go:130] > Change: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622604  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622753  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:32:26.643532  160939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:32:26.643591  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:32:26.666889  160939 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0522 18:32:26.666926  160939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 18:32:26.666940  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.666967  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.667076  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.679769  160939 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:32:26.680589  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:32:26.688804  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:32:26.696790  160939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:32:26.696843  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:32:26.705063  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.713131  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:32:26.721185  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.729165  160939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:32:26.736590  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:32:26.744755  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:32:26.752531  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:32:26.760599  160939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:32:26.767562  160939 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:32:26.767615  160939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:32:26.774559  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:26.839033  160939 ssh_runner.go:195] Run: sudo systemctl restart containerd
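The sed commands above rewrite /etc/containerd/config.toml in place before the restart. A quick spot-check of the values they should leave behind (the expected lines are inferred from the commands, not printed in this log):

	grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	# expected, given the edits above:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"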
	I0522 18:32:26.926529  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.926582  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.926653  160939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:32:26.936733  160939 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:32:26.936821  160939 command_runner.go:130] > [Unit]
	I0522 18:32:26.936842  160939 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:32:26.936853  160939 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:32:26.936864  160939 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:32:26.936876  160939 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:32:26.936886  160939 command_runner.go:130] > Wants=network-online.target
	I0522 18:32:26.936894  160939 command_runner.go:130] > Requires=docker.socket
	I0522 18:32:26.936904  160939 command_runner.go:130] > StartLimitBurst=3
	I0522 18:32:26.936910  160939 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:32:26.936921  160939 command_runner.go:130] > [Service]
	I0522 18:32:26.936928  160939 command_runner.go:130] > Type=notify
	I0522 18:32:26.936937  160939 command_runner.go:130] > Restart=on-failure
	I0522 18:32:26.936949  160939 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:32:26.936965  160939 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:32:26.936979  160939 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:32:26.936992  160939 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:32:26.937014  160939 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:32:26.937027  160939 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:32:26.937042  160939 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:32:26.937058  160939 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:32:26.937072  160939 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:32:26.937081  160939 command_runner.go:130] > ExecStart=
	I0522 18:32:26.937105  160939 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:32:26.937116  160939 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:32:26.937132  160939 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:32:26.937143  160939 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:32:26.937151  160939 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:32:26.937158  160939 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:32:26.937167  160939 command_runner.go:130] > LimitCORE=infinity
	I0522 18:32:26.937177  160939 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:32:26.937188  160939 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0522 18:32:26.937197  160939 command_runner.go:130] > TasksMax=infinity
	I0522 18:32:26.937203  160939 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:32:26.937216  160939 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:32:26.937224  160939 command_runner.go:130] > Delegate=yes
	I0522 18:32:26.937234  160939 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:32:26.937243  160939 command_runner.go:130] > KillMode=process
	I0522 18:32:26.937253  160939 command_runner.go:130] > [Install]
	I0522 18:32:26.937263  160939 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:32:26.937834  160939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:32:26.937891  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:32:26.948358  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.963466  160939 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:32:26.963527  160939 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:32:26.966525  160939 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:32:26.966635  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:32:26.974160  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:32:26.991240  160939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:32:27.087184  160939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:32:27.183939  160939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:32:27.184074  160939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:32:27.199707  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.274364  160939 ssh_runner.go:195] Run: sudo systemctl restart docker
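The log records only the size of the daemon.json it ships (130 bytes), not its contents. A hypothetical daemon.json that selects the cgroupfs driver in the way docker.go:574 describes (the exact payload minikube writes may differ):

	sudo tee /etc/docker/daemon.json >/dev/null <<-'EOF'
	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
	EOF
	sudo systemctl restart docker
	docker info --format '{{.CgroupDriver}}'   # should now print: cgroupfs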
	I0522 18:32:27.497339  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:32:27.508050  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.517912  160939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:32:27.594604  160939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:32:27.603789  160939 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0522 18:32:27.670370  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.738915  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:32:27.750303  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.759297  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.830818  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:32:27.886665  160939 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:32:27.886752  160939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:32:27.890680  160939 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:32:27.890703  160939 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:32:27.890711  160939 command_runner.go:130] > Device: 40h/64d	Inode: 258         Links: 1
	I0522 18:32:27.890720  160939 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:32:27.890729  160939 command_runner.go:130] > Access: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890736  160939 command_runner.go:130] > Modify: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890744  160939 command_runner.go:130] > Change: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890751  160939 command_runner.go:130] >  Birth: -
	I0522 18:32:27.890789  160939 start.go:562] Will wait 60s for crictl version
	I0522 18:32:27.890843  160939 ssh_runner.go:195] Run: which crictl
	I0522 18:32:27.893791  160939 command_runner.go:130] > /usr/bin/crictl
	I0522 18:32:27.893846  160939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:32:27.922140  160939 command_runner.go:130] > Version:  0.1.0
	I0522 18:32:27.922160  160939 command_runner.go:130] > RuntimeName:  docker
	I0522 18:32:27.922164  160939 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:32:27.922170  160939 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:32:27.924081  160939 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
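With /etc/crictl.yaml pointing at cri-dockerd, crictl picks the endpoint up automatically; the standard --runtime-endpoint flag can also pass it explicitly, for example to repeat the version check or list containers through the same socket:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a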
	I0522 18:32:27.924147  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.943721  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.943794  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.963666  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.967758  160939 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:32:27.967841  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:27.982248  160939 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:32:27.985502  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:27.994876  160939 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:32:27.994996  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:27.995038  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.010537  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.010570  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.010579  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.010586  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.010591  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.010596  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.010603  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.010611  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.011521  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.011540  160939 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:32:28.011593  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.027292  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.027322  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.027331  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.027336  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.027341  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.027345  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.027350  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.027355  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.028262  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.028281  160939 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:32:28.028301  160939 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:32:28.028415  160939 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:32:28.028462  160939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:32:28.069428  160939 command_runner.go:130] > cgroupfs
	I0522 18:32:28.070479  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:28.070498  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:28.070517  160939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:32:28.070539  160939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:32:28.070668  160939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:32:28.070717  160939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:32:28.078629  160939 command_runner.go:130] > kubeadm
	I0522 18:32:28.078645  160939 command_runner.go:130] > kubectl
	I0522 18:32:28.078649  160939 command_runner.go:130] > kubelet
	I0522 18:32:28.078672  160939 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:32:28.078732  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:32:28.086243  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:32:28.101448  160939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:32:28.116571  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
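The rendered config is staged as kubeadm.yaml.new and copied into place just before init. kubeadm's standard --dry-run flag can exercise such a config without mutating the node; a sketch using the binary and config paths from this log (minikube itself does not run this step):

	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
	  --dry-run --config /var/tmp/minikube/kubeadm.yaml.new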
	I0522 18:32:28.131251  160939 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:32:28.134083  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:28.142915  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:28.220165  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:28.231892  160939 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:32:28.231919  160939 certs.go:194] generating shared ca certs ...
	I0522 18:32:28.231939  160939 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.232062  160939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:32:28.232110  160939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:32:28.232120  160939 certs.go:256] generating profile certs ...
	I0522 18:32:28.232166  160939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:32:28.232179  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt with IP's: []
	I0522 18:32:28.429639  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt ...
	I0522 18:32:28.429667  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt: {Name:mkf8a2953d60a961d7574d013acfe3a49fa0bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429820  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key ...
	I0522 18:32:28.429830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key: {Name:mk8a5d9e68b7e6e877768e7a2b460a40a5615658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429900  160939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:32:28.429915  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0522 18:32:28.507177  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 ...
	I0522 18:32:28.507207  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43: {Name:mk09ce970fc623afc85e3fab7e404680e391a586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507367  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 ...
	I0522 18:32:28.507382  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43: {Name:mkb137dcb8e57c549f50c85273becdd727997895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507489  160939 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	I0522 18:32:28.507557  160939 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key
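The IP SANs requested at crypto.go:68 can be confirmed on the assembled certificate; -ext is available in OpenSSL 1.1.1 and later, and the path is the one from the log:

	openssl x509 -noout -ext subjectAltName \
	  -in /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	# the four IPs above (10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2) should
	# appear among the listed SANs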
	I0522 18:32:28.507612  160939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:32:28.507627  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt with IP's: []
	I0522 18:32:28.617440  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt ...
	I0522 18:32:28.617473  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt: {Name:mk54959ff23e2bad94a115faba59db15d7610b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617661  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key ...
	I0522 18:32:28.617679  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key: {Name:mkd647f7d425cda8f2c79b7f52b5e4d12a0c0d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617777  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:32:28.617797  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:32:28.617808  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:32:28.617823  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:32:28.617836  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:32:28.617848  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:32:28.617860  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:32:28.617873  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:32:28.617924  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:32:28.617957  160939 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:32:28.617967  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:32:28.617990  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:32:28.618019  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:32:28.618040  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:32:28.618075  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:28.618102  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.618116  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.618128  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.618629  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:32:28.639518  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:32:28.659910  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:32:28.679937  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:32:28.699821  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:32:28.719536  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:32:28.739636  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:32:28.759509  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:32:28.779547  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:32:28.799365  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:32:28.819247  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:32:28.839396  160939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:32:28.854046  160939 ssh_runner.go:195] Run: openssl version
	I0522 18:32:28.858540  160939 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:32:28.858690  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:32:28.866551  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869507  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869532  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869569  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.875214  160939 command_runner.go:130] > b5213941
	I0522 18:32:28.875413  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:32:28.883074  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:32:28.890531  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893535  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893557  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893596  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.899083  160939 command_runner.go:130] > 51391683
	I0522 18:32:28.899310  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:32:28.906972  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:32:28.914876  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917837  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917865  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917909  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.923606  160939 command_runner.go:130] > 3ec20f2e
	I0522 18:32:28.923823  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
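Each <hash>.0 symlink follows OpenSSL's default lookup scheme for -CApath directories, so the links just created can be checked end to end (a self-signed CA found in the path verifies as OK):

	openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem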
	I0522 18:32:28.931516  160939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:32:28.934218  160939 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934259  160939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934296  160939 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:28.934404  160939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:32:28.950504  160939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:32:28.958332  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0522 18:32:28.958356  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0522 18:32:28.958365  160939 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0522 18:32:28.958430  160939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 18:32:28.966017  160939 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 18:32:28.966056  160939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 18:32:28.973169  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0522 18:32:28.973191  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0522 18:32:28.973203  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0522 18:32:28.973217  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973245  160939 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973254  160939 kubeadm.go:156] found existing configuration files:
	
	I0522 18:32:28.973282  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 18:32:28.979661  160939 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980332  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980367  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 18:32:28.987227  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 18:32:28.994428  160939 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994468  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994505  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 18:32:29.001374  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.008562  160939 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008604  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008648  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.015901  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 18:32:29.023088  160939 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023130  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023170  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 18:32:29.030242  160939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 18:32:29.069760  160939 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069799  160939 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069836  160939 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 18:32:29.069844  160939 command_runner.go:130] > [preflight] Running pre-flight checks
	I0522 18:32:29.113834  160939 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113865  160939 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113960  160939 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.113987  160939 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.114021  160939 kubeadm.go:309] OS: Linux
	I0522 18:32:29.114029  160939 command_runner.go:130] > OS: Linux
	I0522 18:32:29.114085  160939 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 18:32:29.114092  160939 command_runner.go:130] > CGROUPS_CPU: enabled
	I0522 18:32:29.114134  160939 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114140  160939 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114177  160939 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 18:32:29.114183  160939 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0522 18:32:29.114230  160939 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 18:32:29.114237  160939 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0522 18:32:29.114278  160939 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 18:32:29.114285  160939 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0522 18:32:29.114324  160939 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 18:32:29.114331  160939 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0522 18:32:29.114373  160939 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 18:32:29.114379  160939 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0522 18:32:29.114421  160939 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114428  160939 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114464  160939 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 18:32:29.114483  160939 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0522 18:32:29.173446  160939 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173485  160939 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173623  160939 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173639  160939 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173777  160939 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0522 18:32:29.173789  160939 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
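As the message notes, the image pull can be done ahead of time so that init itself does not block on the network; a sketch, using the Kubernetes version from this run:

    # Pre-pull the control-plane images before running kubeadm init
    kubeadm config images pull --kubernetes-version v1.30.1
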
	I0522 18:32:29.376675  160939 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379640  160939 out.go:204]   - Generating certificates and keys ...
	I0522 18:32:29.376743  160939 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379742  160939 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0522 18:32:29.379760  160939 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 18:32:29.379853  160939 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.379864  160939 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.571675  160939 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.571705  160939 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.667370  160939 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.667408  160939 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.730638  160939 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:29.730650  160939 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:30.114166  160939 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.114190  160939 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.185007  160939 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185032  160939 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185157  160939 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.185169  160939 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376151  160939 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376188  160939 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376347  160939 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376364  160939 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.621621  160939 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.621651  160939 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.882886  160939 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.882922  160939 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.976851  160939 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 18:32:30.976877  160939 command_runner.go:130] > [certs] Generating "sa" key and public key
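All of the certificates above land under the certificateDir from the start of the phase (/var/lib/minikube/certs). One way to verify them after the fact is kubeadm's own expiration report; a sketch (the --cert-dir flag pointing at the non-default location is an assumption here):

    # List each kubeadm-managed certificate with its expiry and signing CA
    sudo kubeadm certs check-expiration --cert-dir /var/lib/minikube/certs
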
	I0522 18:32:30.976927  160939 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:30.976932  160939 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:31.205083  160939 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.205126  160939 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.287749  160939 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.287812  160939 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.548360  160939 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.548390  160939 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.793952  160939 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.793983  160939 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.889475  160939 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.889508  160939 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.890099  160939 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.890122  160939 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.892764  160939 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895234  160939 out.go:204]   - Booting up control plane ...
	I0522 18:32:31.892832  160939 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895375  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895388  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895507  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895522  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895605  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.895619  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.903936  160939 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.903958  160939 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.904721  160939 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904737  160939 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904800  160939 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 18:32:31.904815  160939 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0522 18:32:31.989235  160939 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989268  160939 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989364  160939 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:31.989377  160939 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:32.490313  160939 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490352  160939 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490462  160939 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:32.490478  160939 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:36.991403  160939 kubeadm.go:309] [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:36.991445  160939 command_runner.go:130] > [api-check] The API server is healthy after 4.501039406s
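The api-check phase simply polls the apiserver's health endpoint until it answers; the same probe can be reproduced by hand. A sketch, using the advertise address from this log (-k because the serving certificate is not in the host's trust store):

    # kubeadm polls an endpoint like this until the apiserver reports ok
    curl -k https://192.168.67.2:8443/healthz
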
	I0522 18:32:37.002153  160939 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.002184  160939 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.012503  160939 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.012532  160939 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.028436  160939 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028465  160939 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028707  160939 kubeadm.go:309] [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.028725  160939 command_runner.go:130] > [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.035001  160939 kubeadm.go:309] [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.035012  160939 command_runner.go:130] > [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.036324  160939 out.go:204]   - Configuring RBAC rules ...
	I0522 18:32:37.036438  160939 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.036450  160939 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.039237  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.039252  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.044789  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.044808  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.047056  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.047074  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.049159  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.049174  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.051503  160939 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.051520  160939 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.397004  160939 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.397044  160939 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.813980  160939 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 18:32:37.814007  160939 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0522 18:32:38.397032  160939 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.397056  160939 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.398018  160939 kubeadm.go:309] 
	I0522 18:32:38.398101  160939 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398119  160939 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398137  160939 kubeadm.go:309] 
	I0522 18:32:38.398211  160939 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398218  160939 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398222  160939 kubeadm.go:309] 
	I0522 18:32:38.398246  160939 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 18:32:38.398255  160939 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0522 18:32:38.398337  160939 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398355  160939 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398434  160939 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398443  160939 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398453  160939 kubeadm.go:309] 
	I0522 18:32:38.398515  160939 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398522  160939 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398529  160939 kubeadm.go:309] 
	I0522 18:32:38.398609  160939 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398618  160939 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398622  160939 kubeadm.go:309] 
	I0522 18:32:38.398664  160939 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 18:32:38.398677  160939 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0522 18:32:38.398789  160939 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398800  160939 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398863  160939 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398869  160939 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398873  160939 kubeadm.go:309] 
	I0522 18:32:38.398944  160939 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.398950  160939 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.399022  160939 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 18:32:38.399032  160939 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0522 18:32:38.399037  160939 kubeadm.go:309] 
	I0522 18:32:38.399123  160939 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399130  160939 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399216  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399222  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399239  160939 kubeadm.go:309] 	--control-plane 
	I0522 18:32:38.399245  160939 command_runner.go:130] > 	--control-plane 
	I0522 18:32:38.399248  160939 kubeadm.go:309] 
	I0522 18:32:38.399370  160939 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399378  160939 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399382  160939 kubeadm.go:309] 
	I0522 18:32:38.399476  160939 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399489  160939 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399636  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.399649  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
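Bootstrap tokens like 941jnz.o7vwsajypu1e25vn expire (24 hours by default), so the join commands printed above eventually go stale. A fresh token and join command can be minted on the control plane at any time; a sketch:

    # Create a new bootstrap token and print a ready-to-run join command
    sudo kubeadm token create --print-join-command
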
	I0522 18:32:38.401263  160939 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401277  160939 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401363  160939 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401380  160939 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401398  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:38.401406  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:38.403405  160939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 18:32:38.404599  160939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 18:32:38.408100  160939 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0522 18:32:38.408121  160939 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0522 18:32:38.408128  160939 command_runner.go:130] > Device: 37h/55d	Inode: 808770      Links: 1
	I0522 18:32:38.408133  160939 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:38.408141  160939 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408145  160939 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408150  160939 command_runner.go:130] > Change: 2024-05-22 17:45:13.285811920 +0000
	I0522 18:32:38.408155  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:13.257809894 +0000
	I0522 18:32:38.408204  160939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 18:32:38.408217  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 18:32:38.424237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 18:32:38.586825  160939 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.590952  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.596051  160939 command_runner.go:130] > serviceaccount/kindnet created
	I0522 18:32:38.602929  160939 command_runner.go:130] > daemonset.apps/kindnet created
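With the manifest applied, kindnet runs as a DaemonSet in kube-system and has to become ready before pods get networking; its rollout can be watched directly. A sketch:

    # Block until the kindnet CNI DaemonSet is ready on every node
    kubectl -n kube-system rollout status daemonset kindnet
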
	I0522 18:32:38.606148  160939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 18:32:38.606224  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.606247  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-737786 minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=multinode-737786 minikube.k8s.io/primary=true
	I0522 18:32:38.613527  160939 command_runner.go:130] > -16
	I0522 18:32:38.613563  160939 ops.go:34] apiserver oom_adj: -16
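The -16 read back here is the apiserver's legacy OOM adjustment: a negative value makes the kernel's OOM killer much less likely to pick the process. Current kernels expose the same knob as oom_score_adj, scaled to the -1000..1000 range; a sketch:

    # The scaled modern equivalent of the -16 oom_adj value above
    cat /proc/$(pgrep kube-apiserver)/oom_score_adj
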
	I0522 18:32:38.671101  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0522 18:32:38.671199  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.679745  160939 command_runner.go:130] > node/multinode-737786 labeled
	I0522 18:32:38.773177  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.171792  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.232239  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.671894  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.732898  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.171368  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.228640  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.671860  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.732183  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.171401  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.231451  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.672085  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.732558  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.172181  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.230594  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.672237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.733746  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.171306  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.233896  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.671416  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.730755  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.171408  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.231441  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.672067  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.729906  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.171343  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.231696  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.671243  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.732606  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.172238  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.229695  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.671885  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.731711  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.171960  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.228503  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.671939  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.733171  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.171805  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.230525  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.672280  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.731666  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.171973  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.230294  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.671915  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.733184  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.171393  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.230515  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.672155  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.732157  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.171406  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.266742  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.671250  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.747943  160939 command_runner.go:130] > NAME      SECRETS   AGE
	I0522 18:32:51.747967  160939 command_runner.go:130] > default   0         0s
	I0522 18:32:51.747991  160939 kubeadm.go:1107] duration metric: took 13.141832952s to wait for elevateKubeSystemPrivileges
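The NotFound retries above are this wait in action: minikube polls until the token controller has created the "default" ServiceAccount, since pods in the default namespace cannot be created before it exists. A minimal equivalent loop, assuming kubectl on PATH and the same kubeconfig:

    # Poll until the token controller creates the default ServiceAccount
    until kubectl --kubeconfig /var/lib/minikube/kubeconfig get sa default >/dev/null 2>&1; do
      sleep 0.5
    done
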
	W0522 18:32:51.748021  160939 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 18:32:51.748034  160939 kubeadm.go:393] duration metric: took 22.813740637s to StartCluster
	I0522 18:32:51.748054  160939 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.748131  160939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.748830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.749052  160939 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:51.750591  160939 out.go:177] * Verifying Kubernetes components...
	I0522 18:32:51.749093  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 18:32:51.749107  160939 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:32:51.749382  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:51.752222  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:51.752296  160939 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:32:51.752312  160939 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:32:51.752326  160939 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	I0522 18:32:51.752339  160939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:32:51.752357  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.752681  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.752857  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.774832  160939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:51.775039  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.776160  160939 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:51.776175  160939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:32:51.776227  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.776423  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.776863  160939 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:32:51.776981  160939 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	I0522 18:32:51.777016  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.777336  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.795509  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.796953  160939 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:51.796975  160939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:32:51.797025  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.814477  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.870824  160939 command_runner.go:130] > apiVersion: v1
	I0522 18:32:51.870847  160939 command_runner.go:130] > data:
	I0522 18:32:51.870853  160939 command_runner.go:130] >   Corefile: |
	I0522 18:32:51.870859  160939 command_runner.go:130] >     .:53 {
	I0522 18:32:51.870863  160939 command_runner.go:130] >         errors
	I0522 18:32:51.870869  160939 command_runner.go:130] >         health {
	I0522 18:32:51.870875  160939 command_runner.go:130] >            lameduck 5s
	I0522 18:32:51.870881  160939 command_runner.go:130] >         }
	I0522 18:32:51.870894  160939 command_runner.go:130] >         ready
	I0522 18:32:51.870908  160939 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0522 18:32:51.870919  160939 command_runner.go:130] >            pods insecure
	I0522 18:32:51.870929  160939 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0522 18:32:51.870939  160939 command_runner.go:130] >            ttl 30
	I0522 18:32:51.870946  160939 command_runner.go:130] >         }
	I0522 18:32:51.870957  160939 command_runner.go:130] >         prometheus :9153
	I0522 18:32:51.870967  160939 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0522 18:32:51.870977  160939 command_runner.go:130] >            max_concurrent 1000
	I0522 18:32:51.870983  160939 command_runner.go:130] >         }
	I0522 18:32:51.870993  160939 command_runner.go:130] >         cache 30
	I0522 18:32:51.871002  160939 command_runner.go:130] >         loop
	I0522 18:32:51.871009  160939 command_runner.go:130] >         reload
	I0522 18:32:51.871022  160939 command_runner.go:130] >         loadbalance
	I0522 18:32:51.871031  160939 command_runner.go:130] >     }
	I0522 18:32:51.871038  160939 command_runner.go:130] > kind: ConfigMap
	I0522 18:32:51.871047  160939 command_runner.go:130] > metadata:
	I0522 18:32:51.871058  160939 command_runner.go:130] >   creationTimestamp: "2024-05-22T18:32:37Z"
	I0522 18:32:51.871067  160939 command_runner.go:130] >   name: coredns
	I0522 18:32:51.871075  160939 command_runner.go:130] >   namespace: kube-system
	I0522 18:32:51.871086  160939 command_runner.go:130] >   resourceVersion: "229"
	I0522 18:32:51.871097  160939 command_runner.go:130] >   uid: d6517ddd-1175-4a40-a10d-60d1d382d7ae
	I0522 18:32:51.892382  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:51.892495  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
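The sed pipeline above splices a hosts block (and a log directive) into the Corefile fetched just before, so that host.minikube.internal resolves to the host gateway from inside the cluster. The injected stanza, as it lands ahead of the forward block:

    hosts {
       192.168.67.1 host.minikube.internal
       fallthrough
    }
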
	I0522 18:32:51.950050  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.950378  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.950733  160939 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.950852  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:51.950863  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.950877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.950889  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.959546  160939 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0522 18:32:51.959576  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.959584  160939 round_trippers.go:580]     Audit-Id: 5ddc21bd-b1b2-4ea2-81cf-c014c9a04f15
	I0522 18:32:51.959590  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.959595  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.959598  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.959602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.959606  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.959736  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:51.960668  160939 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:32:51.960761  160939 node_ready.go:38] duration metric: took 9.99326ms for node "multinode-737786" to be "Ready" ...
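The Ready check is just the Ready condition read out of the node object fetched above; the same test from the command line, as a sketch:

    # Prints "True" once the node's Ready condition is satisfied
    kubectl get node multinode-737786 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
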
	I0522 18:32:51.960805  160939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:32:51.960931  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:32:51.960963  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.960982  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.960996  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.964902  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:51.964929  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.964939  160939 round_trippers.go:580]     Audit-Id: 8b3d34ee-cdb3-49cd-991b-94f61024f9e2
	I0522 18:32:51.964945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.964952  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.964972  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.964977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.964987  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.965722  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"354"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59005 chars]
	I0522 18:32:51.970917  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	I0522 18:32:51.971068  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:51.971109  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.971130  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.971146  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.043914  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:52.045304  160939 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0522 18:32:52.045329  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.045339  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.045343  160939 round_trippers.go:580]     Audit-Id: bed69948-0150-43f6-8c9c-dfd39f8a81e4
	I0522 18:32:52.045349  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.045354  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.045361  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.045365  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.046685  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.047307  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.047329  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.047339  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.047344  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.049383  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:52.051476  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.051500  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.051510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.051516  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.051520  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.051524  160939 round_trippers.go:580]     Audit-Id: 2d50dfec-8764-4cd8-92b8-99f40ba4532d
	I0522 18:32:52.051530  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.051543  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.051659  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.471981  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.472002  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.472013  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.472019  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.547388  160939 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0522 18:32:52.547416  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.547425  160939 round_trippers.go:580]     Audit-Id: 3eb91eea-1138-4663-bd0b-d4f080c3a1ee
	I0522 18:32:52.547430  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.547435  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.547439  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.547457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.547463  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.547916  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.548699  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.548751  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.548782  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.548796  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.554135  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.554200  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.554224  160939 round_trippers.go:580]     Audit-Id: c62627b8-a513-4303-8697-a7fe1f12763e
	I0522 18:32:52.554239  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.554272  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.554291  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.554304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.554318  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.554527  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.556697  160939 command_runner.go:130] > configmap/coredns replaced
	I0522 18:32:52.556753  160939 start.go:946] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0522 18:32:52.557175  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:52.557491  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:52.557873  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.557907  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.557920  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.557932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558046  160939 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0522 18:32:52.558165  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:32:52.558237  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.558260  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558272  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.560256  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:52.560319  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.560338  160939 round_trippers.go:580]     Audit-Id: 12b0e11e-6a44-4304-a157-2b7055e2205e
	I0522 18:32:52.560351  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.560363  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.560396  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.560416  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.560431  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.560444  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.560488  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561030  160939 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561137  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.561162  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.561192  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.561209  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.561222  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.561529  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:52.561547  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.561556  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.561562  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.561567  160939 round_trippers.go:580]     Content-Length: 1273
	I0522 18:32:52.561573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.561577  160939 round_trippers.go:580]     Audit-Id: e2fb2ed9-f480-430a-b9b8-1cb5e5498c36
	I0522 18:32:52.561587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.561592  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.561795  160939 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0522 18:32:52.562115  160939 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.562161  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:32:52.562173  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.562180  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.562188  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.562193  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.566308  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.566355  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.566400  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566361  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566429  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566439  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566449  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566463  160939 round_trippers.go:580]     Content-Length: 1220
	I0522 18:32:52.566468  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566473  160939 round_trippers.go:580]     Audit-Id: 6b60d46d-17ef-45bb-880c-06c439fe9bab
	I0522 18:32:52.566411  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566491  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566498  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566501  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.566505  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566505  160939 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.566509  160939 round_trippers.go:580]     Audit-Id: 2b01bd0d-fb2f-4a1e-8831-7dc2e68860f5
	I0522 18:32:52.566521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566538  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"360","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
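[Editorial sketch] The "storageclass.storage.k8s.io/standard created" message and the PUT above correspond to minikube's default-storageclass addon. A minimal client-go sketch of creating an equivalent StorageClass follows; clientset construction is omitted (cs is assumed to be a *kubernetes.Clientset as in the previous sketch), and the field values are taken from the response body in the log.

	import (
		"context"

		storagev1 "k8s.io/api/storage/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// ensureDefaultStorageClass creates a "standard" StorageClass marked as the
	// cluster default, mirroring the object visible in the response body above.
	func ensureDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset) error {
		sc := &storagev1.StorageClass{
			ObjectMeta: metav1.ObjectMeta{
				Name: "standard",
				Annotations: map[string]string{
					// This annotation is what makes the class the default.
					"storageclass.kubernetes.io/is-default-class": "true",
				},
				Labels: map[string]string{"addonmanager.kubernetes.io/mode": "EnsureExists"},
			},
			Provisioner: "k8s.io/minikube-hostpath",
		}
		_, err := cs.StorageV1().StorageClasses().Create(ctx, sc, metav1.CreateOptions{})
		return err
	}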
	I0522 18:32:52.972030  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.972055  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.972069  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.972073  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.973864  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.973887  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.973900  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.973905  160939 round_trippers.go:580]     Audit-Id: 487db757-1a6c-442b-b5d4-799652d478f6
	I0522 18:32:52.973912  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.973918  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.973922  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.973927  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.974296  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:52.974890  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.974910  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.974922  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.974927  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.976545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.976564  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.976574  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.976579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.976584  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.976589  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.976594  160939 round_trippers.go:580]     Audit-Id: 785dc732-84fe-4320-964c-c2a36a76c8f6
	I0522 18:32:52.976600  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.976934  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.058578  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:53.058609  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.058620  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.058627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.061245  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.061289  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.061299  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:53.061340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.061372  160939 round_trippers.go:580]     Audit-Id: 77d818dd-5f3a-495e-b1ef-ad1a288275fa
	I0522 18:32:53.061388  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.061402  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.061415  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.061432  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.061472  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"370","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:53.061571  160939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-737786" context rescaled to 1 replicas
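[Editorial sketch] The GET/PUT pair on .../deployments/coredns/scale above, confirmed by the "rescaled to 1 replicas" line, is the standard Scale-subresource round trip. A minimal sketch of the same rescale with client-go (again assuming an existing clientset cs); the deployment name and target replica count come straight from the log.

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// rescaleCoreDNS sets the coredns deployment to a single replica via the
	// autoscaling/v1 Scale subresource, mirroring the GET-then-PUT in the log.
	func rescaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset) error {
		deployments := cs.AppsV1().Deployments("kube-system")
		scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = 1
		// The apiserver rejects the update if resourceVersion is stale, which is
		// why a fresh GET immediately precedes the PUT in the log.
		_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}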
	I0522 18:32:53.076516  160939 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0522 18:32:53.076577  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0522 18:32:53.076599  160939 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076613  160939 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076633  160939 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0522 18:32:53.076657  160939 command_runner.go:130] > pod/storage-provisioner created
	I0522 18:32:53.076679  160939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02727208s)
	I0522 18:32:53.079116  160939 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:32:53.080504  160939 addons.go:505] duration metric: took 1.3313922s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0522 18:32:53.471419  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.471453  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.471462  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.471488  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.473769  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.473791  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.473800  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.473806  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.473811  160939 round_trippers.go:580]     Audit-Id: 19f0699f-65e4-4321-a5c4-f6dcf712595d
	I0522 18:32:53.473821  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.473827  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.473830  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.474009  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.474506  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.474523  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.474532  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.474538  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.476545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.476568  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.476579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.476584  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.476591  160939 round_trippers.go:580]     Audit-Id: 723b363a-893a-4a61-92a4-6c8128f0cdae
	I0522 18:32:53.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.476602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.476735  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.971555  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.971574  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.971587  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.971591  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.973627  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.973649  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.973659  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.973664  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.973670  160939 round_trippers.go:580]     Audit-Id: e1a5610a-326e-418b-be80-a1b218bad573
	I0522 18:32:53.973679  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.973686  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.973691  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.973900  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.974364  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.974377  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.974386  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.974395  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.976104  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.976125  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.976134  160939 round_trippers.go:580]     Audit-Id: 1d117d40-7bef-4873-8469-b7cbb9e6e3e0
	I0522 18:32:53.976139  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.976143  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.976148  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.976158  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.976278  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.976641  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
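[Editorial sketch] The repeating GET-pod/GET-node pairs before and after this point are minikube's readiness poll (pod_ready.go): it refetches the coredns pod until its Ready condition turns True. A condensed sketch of such a poll, under the same clientset assumption; the function name and the ~500ms cadence are inferred from the log timestamps, not taken from minikube's source.

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodReady polls a pod until its PodReady condition is True or the
	// context expires.
	func waitForPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("pod %s/%s never became Ready: %w", ns, name, ctx.Err())
			case <-time.After(500 * time.Millisecond):
			}
		}
	}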
	I0522 18:32:54.471526  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.471550  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.471561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.471566  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.473892  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.473909  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.473916  160939 round_trippers.go:580]     Audit-Id: 38fa8439-426c-4d8e-8939-768fdd726b5d
	I0522 18:32:54.473920  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.473923  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.473929  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.473935  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.473939  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.474175  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.474657  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.474672  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.474679  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.474682  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.476422  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.476440  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.476449  160939 round_trippers.go:580]     Audit-Id: a464492a-887c-4ec3-9a36-841c6416e733
	I0522 18:32:54.476454  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.476458  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.476461  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.476465  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.476470  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.476646  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:54.971300  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.971328  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.971338  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.971345  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.973536  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.973554  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.973560  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.973564  160939 round_trippers.go:580]     Audit-Id: 233e0e2b-7f8e-4aa8-8c2e-b30dfaf9e4ee
	I0522 18:32:54.973569  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.973575  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.973580  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.973588  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.973824  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.974258  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.974270  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.974277  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.974281  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.976126  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.976141  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.976157  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.976161  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.976166  160939 round_trippers.go:580]     Audit-Id: 72f4a310-bf67-444b-9e24-1577b45c6c56
	I0522 18:32:54.976171  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.976176  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.976347  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.471862  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.471892  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.471903  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.471908  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.474083  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:55.474099  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.474105  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.474108  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.474111  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.474114  160939 round_trippers.go:580]     Audit-Id: 8719e64b-1bf6-4245-a412-eed38a58d1ce
	I0522 18:32:55.474117  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.474121  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.474290  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.474797  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.474823  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.474832  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.474840  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.476324  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.476342  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.476349  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.476355  160939 round_trippers.go:580]     Audit-Id: db213f13-4ec8-4ca3-8987-3f1626a1ad2d
	I0522 18:32:55.476361  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.476365  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.476368  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.476372  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.476512  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.972155  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.972178  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.972186  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.972189  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.973945  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.973967  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.973975  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.973981  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.973987  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.973990  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.973994  160939 round_trippers.go:580]     Audit-Id: a2f51de9-bbaf-49c3-b52e-cd37fc92f529
	I0522 18:32:55.973999  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.974153  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.974595  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.974611  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.974621  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.974627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.976270  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.976293  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.976301  160939 round_trippers.go:580]     Audit-Id: 93227216-8ffe-41b3-8a0d-0b4e86a54912
	I0522 18:32:55.976306  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.976310  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.976315  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.976319  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.976325  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.976427  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.976688  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:56.472139  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.472158  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.472167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.472170  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.474238  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.474260  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.474268  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.474274  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.474279  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.474283  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.474287  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.474292  160939 round_trippers.go:580]     Audit-Id: f67f7ae7-b10d-49f2-94a9-005c4a460c94
	I0522 18:32:56.474484  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.474925  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.474940  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.474946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.474951  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.476537  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.476552  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.476558  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.476563  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.476567  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.476570  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.476573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.476576  160939 round_trippers.go:580]     Audit-Id: 518e1062-0e5b-47ad-b60f-0ff66e25a622
	I0522 18:32:56.476712  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:56.971350  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.971373  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.971381  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.971384  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.973476  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.973497  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.973506  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.973511  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.973517  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.973523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.973527  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.973531  160939 round_trippers.go:580]     Audit-Id: eedbefe3-18e8-407d-9ede-0033266cdf11
	I0522 18:32:56.973633  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.974094  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.974111  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.974118  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.974123  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.975718  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.975738  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.975747  160939 round_trippers.go:580]     Audit-Id: 74afa443-a147-43c7-8759-9886afead09a
	I0522 18:32:56.975753  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.975758  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.975764  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.975768  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.975771  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.975928  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.471499  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.471522  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.471528  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.471532  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.473644  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.473662  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.473668  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.473671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.473674  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.473677  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.473680  160939 round_trippers.go:580]     Audit-Id: 2eec1341-a4a0-4edc-9eab-dd0cee12d4eb
	I0522 18:32:57.473682  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.473870  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.474329  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.474343  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.474350  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.474353  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.475871  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.475886  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.475896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.475901  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.475906  160939 round_trippers.go:580]     Audit-Id: 7e8e4b95-aa91-463a-8f1e-a7944e5daa49
	I0522 18:32:57.475911  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.475916  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.475920  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.476058  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.971752  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.971774  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.971786  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.971790  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.974020  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.974037  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.974043  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.974047  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.974051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.974054  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.974057  160939 round_trippers.go:580]     Audit-Id: 9042de65-ddca-4653-8deb-6e07b20ad9d2
	I0522 18:32:57.974061  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.974263  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.974686  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.974698  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.974705  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.974709  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.976426  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.976445  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.976453  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.976459  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.976464  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.976467  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.976472  160939 round_trippers.go:580]     Audit-Id: 9526988d-2210-4a9c-a210-f69ada2f111e
	I0522 18:32:57.976478  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.976615  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.976919  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
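
The block above is one iteration of minikube's pod-readiness wait: roughly every 500 ms it re-fetches the coredns pod and its node, and pod_ready.go:102 reports that the pod's Ready condition is still False. A minimal client-go sketch of an equivalent poll follows, for reference only; the 500 ms interval is read off the log timestamps, while the 4-minute budget, kubeconfig path, and error handling are illustrative assumptions, not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; the path is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Re-check the pod every 500ms (as in the log above) until it reports
	// Ready or an assumed 4-minute budget expires.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-fhhmr", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					// Stop polling only when Ready flips to True.
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
	}
}
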
	I0522 18:32:58.471854  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.471880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.471893  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.471899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.474173  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.474197  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.474206  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.474211  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.474216  160939 round_trippers.go:580]     Audit-Id: 0827c408-752f-4496-b2bf-06881300dabc
	I0522 18:32:58.474220  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.474224  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.474229  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.474408  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.474983  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.474998  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.475008  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.475014  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.476910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.476934  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.476952  160939 round_trippers.go:580]     Audit-Id: 338928cb-0e5e-4004-be77-29760ea7f6ae
	I0522 18:32:58.476958  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.476962  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.476966  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.476971  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.476986  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.477133  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:58.972097  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.972125  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.972137  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.972141  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.974651  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.974676  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.974683  160939 round_trippers.go:580]     Audit-Id: 3b3e33fc-c0a8-4a82-9e28-68c6c5eaf90e
	I0522 18:32:58.974688  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.974692  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.974695  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.974698  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.974707  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.974973  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.975580  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.975600  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.975610  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.975615  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.977624  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.977644  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.977654  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.977661  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.977666  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.977671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.977676  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.977680  160939 round_trippers.go:580]     Audit-Id: aa509792-9021-4f49-a36b-6862ae864dbf
	I0522 18:32:58.977836  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.471442  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.471471  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.471481  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.471486  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.473954  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.473974  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.473983  160939 round_trippers.go:580]     Audit-Id: 04e773e3-ead6-4608-b93f-200b1f7771a2
	I0522 18:32:59.473989  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.473992  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.473997  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.474001  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.474005  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.474205  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.474819  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.474880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.474905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.474923  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.476903  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.476923  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.476932  160939 round_trippers.go:580]     Audit-Id: 57919320-6611-4945-a59e-eab9e9d1f7e3
	I0522 18:32:59.476937  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.476943  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.476949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.476953  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.476958  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.477092  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.971835  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.971912  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.971932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.971946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.974565  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.974586  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.974602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.974606  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.974610  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.974614  160939 round_trippers.go:580]     Audit-Id: 4509f4e5-e206-4cb4-9616-c5dedd8269bf
	I0522 18:32:59.974619  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.974624  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.974794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.975386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.975404  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.975413  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.975419  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.977401  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.977425  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.977434  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.977440  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.977445  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.977449  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.977453  160939 round_trippers.go:580]     Audit-Id: ba22dbea-6d68-4ec4-bcad-c24172ba5062
	I0522 18:32:59.977458  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.977594  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.977937  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:33:00.471222  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.471241  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.471249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.471252  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.473593  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.473618  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.473629  160939 round_trippers.go:580]     Audit-Id: c4fb389b-3f7d-490e-a802-3bf985dfd423
	I0522 18:33:00.473636  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.473641  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.473645  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.473651  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.473656  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.473892  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.474545  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.474565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.474576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.474581  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.476561  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.476581  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.476590  160939 round_trippers.go:580]     Audit-Id: 67254c57-0400-4b43-af9d-f4913af7b105
	I0522 18:33:00.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.476603  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.476608  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.476611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.476748  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:00.971233  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.971261  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.971299  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.971306  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.973731  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.973750  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.973758  160939 round_trippers.go:580]     Audit-Id: 2f76e9b4-7689-4d89-b284-e9126bd9bad5
	I0522 18:33:00.973762  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.973765  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.973771  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.973774  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.973784  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.974017  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.974608  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.974625  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.974634  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.974639  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.976439  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.976457  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.976465  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.976470  160939 round_trippers.go:580]     Audit-Id: f4fe94f7-5d5c-4b51-a0c7-f46b19a6f0d4
	I0522 18:33:00.976477  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.976485  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.976494  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.976502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.976610  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.471893  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.471931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.471942  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.471949  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.474657  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.474680  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.474688  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.474696  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.474702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.474725  160939 round_trippers.go:580]     Audit-Id: f26f6817-f4b1-4acb-bdf5-088215c31307
	I0522 18:33:01.474736  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.474740  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.474974  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.475618  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.475639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.475649  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.475655  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.477465  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.477487  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.477497  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.477505  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.477510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.477514  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.477517  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.477524  160939 round_trippers.go:580]     Audit-Id: 1977529f-1acd-423c-9682-42cf6dd4398d
	I0522 18:33:01.477708  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.971204  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.971371  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.971388  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.971393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974041  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.974091  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.974104  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.974111  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.974116  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.974121  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.974127  160939 round_trippers.go:580]     Audit-Id: 292c70c4-b00e-4836-b96a-6c8a747f9bd9
	I0522 18:33:01.974131  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.974293  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.974866  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.974888  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.974899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.976825  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.976848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.976856  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.976862  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.976868  160939 round_trippers.go:580]     Audit-Id: 388c0271-dee4-4384-b77b-c690f1d36c5a
	I0522 18:33:01.976873  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.976880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.976883  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.977037  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.471454  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.471549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.471565  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.471574  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.474157  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.474178  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.474186  160939 round_trippers.go:580]     Audit-Id: 82bb2437-1ea8-4e8d-9e5f-70376d7ee9ee
	I0522 18:33:02.474192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.474196  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.474200  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.474205  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.474208  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.474392  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.475060  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.475077  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.475087  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.475092  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.477070  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.477099  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.477109  160939 round_trippers.go:580]     Audit-Id: 67eab720-8fd6-4965-a754-5010c88a7253
	I0522 18:33:02.477116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.477120  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.477124  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.477127  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.477131  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.477280  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.477649  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
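
Worth noting: every response body for coredns-7db6d8ff4d-fhhmr above already carries deletionTimestamp 2024-05-22T18:33:22Z with a 30-second grace period, i.e. the pod is terminating and will not become Ready again, yet the loop keeps polling it until the wait budget runs out. A waiter could detect this and fail fast; a sketch of the check, assuming client-go types (isTerminating is a hypothetical helper, not minikube code):

package podwait

import corev1 "k8s.io/api/core/v1"

// isTerminating reports whether the API server has begun deleting the pod.
// Once metadata.deletionTimestamp is set, the pod is shutting down and will
// not return to Ready, so a readiness waiter can stop polling it and
// re-resolve the replacement pod instead.
func isTerminating(pod *corev1.Pod) bool {
	return pod.ObjectMeta.DeletionTimestamp != nil
}
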
	I0522 18:33:02.971540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.971565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.971576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.971582  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.974293  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.974315  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.974325  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.974330  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.974335  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.974340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.974345  160939 round_trippers.go:580]     Audit-Id: ad75c6ab-9962-47cf-be26-f410ec61bd12
	I0522 18:33:02.974350  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.974587  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.975218  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.975239  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.975249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.975258  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.977182  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.977245  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.977260  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.977266  160939 round_trippers.go:580]     Audit-Id: c0467f5a-9a3a-40e8-b473-9c175fd6891e
	I0522 18:33:02.977271  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.977277  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.977284  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.977288  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.977392  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.472108  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.472133  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.472143  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.472149  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.474741  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.474768  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.474778  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.474782  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.474787  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.474792  160939 round_trippers.go:580]     Audit-Id: 1b9bea48-179f-40ca-a879-0e436eb40d14
	I0522 18:33:03.474797  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.474801  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.474970  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.475572  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.475591  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.475601  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.475607  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.477470  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.477489  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.477497  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.477502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.477506  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.477511  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.477515  160939 round_trippers.go:580]     Audit-Id: b00b1393-d773-4e79-83a7-fbadc0d83dce
	I0522 18:33:03.477521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.477650  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.971411  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.971440  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.971450  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.971455  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.974132  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.974155  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.974164  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.974171  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.974176  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.974180  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.974185  160939 round_trippers.go:580]     Audit-Id: 2b46951a-0d87-464c-b928-e0491b518b0e
	I0522 18:33:03.974192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.974344  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.974929  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.974949  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.974959  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.974965  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.976727  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.976759  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.976769  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.976775  160939 round_trippers.go:580]     Audit-Id: efda080a-3af4-4b70-aa46-baefc2b1a086
	I0522 18:33:03.976779  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.976784  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.976788  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.976792  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.977006  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.471440  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.471466  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.471475  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.471478  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.473781  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.473798  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.473806  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.473812  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.473823  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.473828  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.473832  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.473837  160939 round_trippers.go:580]     Audit-Id: 584fe422-d82d-4c7e-81d2-665d8be8873b
	I0522 18:33:04.474014  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.474484  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.474542  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.474564  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.474581  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.476818  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.476848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.476856  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.476862  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.476866  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.476872  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.476877  160939 round_trippers.go:580]     Audit-Id: 577875ba-d973-41fb-8b48-0973202f1354
	I0522 18:33:04.476885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.477034  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.971729  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.971751  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.971759  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.971763  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.974273  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.974295  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.974304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.974311  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.974318  160939 round_trippers.go:580]     Audit-Id: e77cbda3-9098-456e-962d-06d9e7e98aee
	I0522 18:33:04.974323  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.974336  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.974341  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.974475  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.975121  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.975157  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.975167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.975172  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.977047  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:04.977076  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.977086  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.977094  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.977102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.977110  160939 round_trippers.go:580]     Audit-Id: 15591115-c0cb-473f-90d4-6c56cf6353d7
	I0522 18:33:04.977116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.977124  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.977257  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.977558  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
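The repeated request/response pairs above are single iterations of the pod_ready.go wait loop: roughly every 500ms it GETs the coredns pod and then the node it is scheduled on, and keeps polling until the pod's Ready condition turns True or the 6m0s budget expires; each unsuccessful pass logs the pod_ready.go:102 line above. A minimal sketch of such a loop, assuming k8s.io/client-go and k8s.io/apimachinery (the package and function names here are illustrative, not minikube's actual identifiers):

    // Sketch only: approximates the wait loop producing the log lines above.
    // Assumes k8s.io/client-go and k8s.io/apimachinery; all names are
    // illustrative, not minikube's actual identifiers.
    package podwait

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls the pod on a 500ms interval (the cadence visible in
    // the timestamps above) until its Ready condition is True or the timeout
    // elapses.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

On success the loop exits and the log records the total wait as a "duration metric", as seen further below.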
	I0522 18:33:05.471962  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.471987  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.471997  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.472003  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.474481  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.474506  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.474516  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.474523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.474527  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.474532  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.474536  160939 round_trippers.go:580]     Audit-Id: fdb343ad-37ed-4d5e-8481-409ca7bff1bb
	I0522 18:33:05.474542  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.474675  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.475316  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.475335  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.475345  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.475349  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.477162  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:05.477192  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.477208  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.477219  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.477224  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.477230  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.477237  160939 round_trippers.go:580]     Audit-Id: 5a4a1adb-a9e7-45d6-89b9-6f8cbdc8e14f
	I0522 18:33:05.477241  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.477365  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:05.971575  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.971603  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.971614  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.971620  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.973961  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.973988  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.973998  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.974005  160939 round_trippers.go:580]     Audit-Id: 6cf57dbb-f61f-4a34-ba71-0fa1a7be6c2f
	I0522 18:33:05.974009  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.974015  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.974020  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.974024  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.974227  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.974844  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.974866  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.974877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.974885  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.976914  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.976937  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.976948  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.976955  160939 round_trippers.go:580]     Audit-Id: f5c6902b-e141-4739-b75c-abe5d7d10bcc
	I0522 18:33:05.976962  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.976969  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.976977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.976982  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.977139  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.471359  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:06.471382  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.471390  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.471393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.473976  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.473998  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.474008  160939 round_trippers.go:580]     Audit-Id: 678a5898-c668-42b8-9f9d-cd08c0af9f0a
	I0522 18:33:06.474014  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.474021  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.474026  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.474032  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.474036  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.474212  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"419","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6465 chars]
	I0522 18:33:06.474787  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.474806  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.474816  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.474824  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.476696  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.476720  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.476727  160939 round_trippers.go:580]     Audit-Id: 08522360-196f-4610-a526-8fbc3b876994
	I0522 18:33:06.476732  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.476736  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.476739  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.476742  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.476754  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.476918  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.477418  160939 pod_ready.go:97] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2
}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0522 18:33:06.477449  160939 pod_ready.go:81] duration metric: took 14.506466075s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	E0522 18:33:06.477464  160939 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
7.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
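The wait on coredns-7db6d8ff4d-fhhmr ends here not because the pod became Ready but because it no longer can: its deletionTimestamp was set (presumably the coredns Deployment being scaled down to a single replica), the container exited 0, and the pod reached the terminal phase Succeeded. A terminal-phase guard like the hedged sketch below (helper name hypothetical, continuing the earlier podwait sketch) is what makes the loop fail fast with the "(skipping!)" message instead of polling out the remaining timeout:

    // Sketch of a terminal-phase guard matching the "(skipping!)" entry above;
    // helper name is hypothetical.
    package podwait

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // checkNotTerminal fails fast for pods that already ran to completion:
    // a Succeeded or Failed pod will never report Ready=True, so waiting out
    // the full timeout would be pointless.
    func checkNotTerminal(pod *corev1.Pod) error {
        if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
            return fmt.Errorf("pod %q has terminal phase %q and will never be Ready", pod.Name, pod.Status.Phase)
        }
        return nil
    }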
	I0522 18:33:06.477476  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:06.477540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.477549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.477558  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.477569  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.479562  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.479577  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.479583  160939 round_trippers.go:580]     Audit-Id: 9a30cf33-1204-4670-a99f-86946c97d423
	I0522 18:33:06.479587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.479591  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.479597  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.479605  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.479611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.479794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.480253  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.480269  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.480275  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.480279  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.481839  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.481857  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.481867  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.481872  160939 round_trippers.go:580]     Audit-Id: fa40a49d-204f-481d-8912-a34512c1ae3b
	I0522 18:33:06.481876  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.481880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.481884  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.481888  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.481980  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.978658  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.978680  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.978691  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.978699  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.980836  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.980853  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.980860  160939 round_trippers.go:580]     Audit-Id: afbb292e-0ad0-4084-869c-e9ab1e1013e2
	I0522 18:33:06.980864  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.980867  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.980869  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.980871  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.980874  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.981047  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.981449  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.981462  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.981468  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.981471  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.982978  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.983001  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.983007  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.983010  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.983014  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.983018  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.983021  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.983024  160939 round_trippers.go:580]     Audit-Id: 5f3372bc-5c9a-49ce-8e2e-d96da0513d85
	I0522 18:33:06.983146  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.478352  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:07.478377  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.478384  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.478388  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.480498  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.480523  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.480531  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.480535  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.480540  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.480543  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.480546  160939 round_trippers.go:580]     Audit-Id: eb5f2654-4971-4578-bff8-10e4102baa23
	I0522 18:33:07.480550  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.480747  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:33:07.481177  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.481191  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.481197  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.481201  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.482856  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.482869  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.482876  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.482880  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.482882  160939 round_trippers.go:580]     Audit-Id: 8e36f69f-54f0-4e9d-a61f-f28960dbb847
	I0522 18:33:07.482885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.482891  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.482896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.483013  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.483304  160939 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.483324  160939 pod_ready.go:81] duration metric: took 1.005836965s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483334  160939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:33:07.483393  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.483399  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.483403  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.485055  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.485074  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.485080  160939 round_trippers.go:580]     Audit-Id: 36a9d3b1-5c0c-41cd-92e6-65aaf83162ed
	I0522 18:33:07.485084  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.485089  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.485093  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.485098  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.485102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.485211  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:33:07.485525  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.485537  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.485544  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.485547  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.486957  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.486977  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.486984  160939 round_trippers.go:580]     Audit-Id: 4d183f34-de9b-40df-89b0-747f4b8d080a
	I0522 18:33:07.486991  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.486997  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.487008  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.487015  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.487019  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.487106  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.487417  160939 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.487433  160939 pod_ready.go:81] duration metric: took 4.091969ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
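Note the etcd pod's metadata in the response above: kubernetes.io/config.source=file, a kubernetes.io/config.mirror annotation, and an ownerReference pointing at the Node object. These control-plane pods are kubelet-managed static pods mirrored into the API, which is why the same readiness poll works on them. A small sketch of how such mirror pods can be recognized (helper name hypothetical, same podwait sketch):

    // Sketch: recognizing the kubelet's mirror of a static pod, as seen with
    // etcd-multinode-737786 above; helper name is hypothetical.
    package podwait

    import corev1 "k8s.io/api/core/v1"

    // isMirrorPod reports whether the API object mirrors a kubelet static pod;
    // the kubelet stamps such pods with the kubernetes.io/config.mirror
    // annotation visible in the response body above.
    func isMirrorPod(pod *corev1.Pod) bool {
        _, ok := pod.Annotations["kubernetes.io/config.mirror"]
        return ok
    }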
	I0522 18:33:07.487445  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487498  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:33:07.487505  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.487511  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.487514  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.489030  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.489044  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.489060  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.489064  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.489068  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.489072  160939 round_trippers.go:580]     Audit-Id: 816d35e6-d77c-435e-912a-947f9c9ca4d7
	I0522 18:33:07.489075  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.489078  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.489182  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:33:07.489546  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.489558  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.489564  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.489568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.490910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.490924  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.490930  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.490934  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.490937  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.490942  160939 round_trippers.go:580]     Audit-Id: 15a2ac49-01ac-4660-8380-560b4572c707
	I0522 18:33:07.490945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.490949  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.491063  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.491412  160939 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.491430  160939 pod_ready.go:81] duration metric: took 3.978447ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491441  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491501  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:33:07.491510  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.491520  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.491525  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.492901  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.492917  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.492936  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.492944  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.492949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.492953  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.492958  160939 round_trippers.go:580]     Audit-Id: 599fa209-a829-4a91-9f16-72ec6e1a6954
	I0522 18:33:07.492961  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.493092  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:33:07.493557  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.493574  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.493584  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.493594  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.495001  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.495023  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.495032  160939 round_trippers.go:580]     Audit-Id: 451564e8-a844-4514-b8e9-ba808ecbe9d8
	I0522 18:33:07.495042  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.495047  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.495051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.495057  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.495061  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.495200  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.495470  160939 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.495494  160939 pod_ready.go:81] duration metric: took 4.045749ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495507  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495547  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:33:07.495553  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.495561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.495568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.497087  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.497100  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.497105  160939 round_trippers.go:580]     Audit-Id: 1fe00356-708f-49ce-b6e8-360006eb0d30
	I0522 18:33:07.497109  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.497114  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.497119  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.497123  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.497129  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.497236  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:33:07.671971  160939 request.go:629] Waited for 174.334017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672035  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672040  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.672048  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.672051  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.673738  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.673754  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.673762  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.673769  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.673773  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.673777  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.673781  160939 round_trippers.go:580]     Audit-Id: 72f84e56-248f-49c0-b60e-16c5fc7a3e8c
	I0522 18:33:07.673785  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.673915  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.674199  160939 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.674216  160939 pod_ready.go:81] duration metric: took 178.701037ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
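
The repeated "Waited for ... due to client-side throttling" lines above come from client-go's own token-bucket rate limiter, not from server-side API Priority and Fairness: once the client exceeds its configured QPS/Burst, further requests are delayed locally before being sent. A minimal sketch of where those limits live when building a client (real client-go API; the raised values are illustrative):

    package sketch

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	// client-go defaults to QPS=5, Burst=10; requests beyond the burst
    	// are delayed client-side, producing the "Waited for ..." log lines.
    	cfg.QPS = 50
    	cfg.Burst = 100
    	return kubernetes.NewForConfig(cfg)
    }
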
	I0522 18:33:07.674225  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.871582  160939 request.go:629] Waited for 197.277518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871632  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.871646  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.871651  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.873675  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.873695  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.873702  160939 round_trippers.go:580]     Audit-Id: d0aea0c3-6995-4f17-9b3f-5c0b00c0a82e
	I0522 18:33:07.873707  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.873710  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.873714  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.873718  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.873721  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.873885  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:33:08.071516  160939 request.go:629] Waited for 197.279562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071592  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071600  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.071608  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.071612  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.073750  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.074093  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.074136  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.074152  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.074164  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.074178  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.074192  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.074205  160939 round_trippers.go:580]     Audit-Id: 9b07fddc-fd9a-4741-b67f-7bda2d392bdb
	I0522 18:33:08.074358  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:08.074852  160939 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:08.074892  160939 pod_ready.go:81] duration metric: took 400.659133ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:08.074912  160939 pod_ready.go:38] duration metric: took 16.114074117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
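
Each pod_ready.go "has status \"Ready\":\"True\"" line above boils down to reading the PodReady condition from the pod's status. A minimal sketch of that check (assuming client-go; the function name is illustrative):

    package sketch

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // isPodReady reports whether the PodReady condition is True, which is
    // what the pod_ready.go lines above assert for each control-plane pod.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }
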
	I0522 18:33:08.074944  160939 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:33:08.075020  160939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:33:08.085416  160939 command_runner.go:130] > 2247
	I0522 18:33:08.086205  160939 api_server.go:72] duration metric: took 16.337127031s to wait for apiserver process to appear ...
	I0522 18:33:08.086224  160939 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:33:08.086244  160939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:33:08.090306  160939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:33:08.090371  160939 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:33:08.090381  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.090392  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.090411  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.091107  160939 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:33:08.091121  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.091127  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.091130  160939 round_trippers.go:580]     Audit-Id: d9f416c6-963b-4b2c-9260-40a10a9a60da
	I0522 18:33:08.091133  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.091136  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.091138  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.091141  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.091144  160939 round_trippers.go:580]     Content-Length: 263
	I0522 18:33:08.091156  160939 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:33:08.091223  160939 api_server.go:141] control plane version: v1.30.1
	I0522 18:33:08.091237  160939 api_server.go:131] duration metric: took 5.007834ms to wait for apiserver health ...
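
The healthz gate above is a plain HTTPS GET against the apiserver that must return 200 with body "ok", followed by a GET of /version to record the control-plane version. A minimal sketch of the health check (rest.HTTPClientFor is a real client-go helper; the function name is illustrative):

    package sketch

    import (
    	"fmt"
    	"io"
    	"net/http"

    	"k8s.io/client-go/rest"
    )

    // checkHealthz mirrors the gate above: GET /healthz must return
    // HTTP 200 with body "ok" before the apiserver counts as healthy.
    func checkHealthz(cfg *rest.Config) error {
    	client, err := rest.HTTPClientFor(cfg) // reuses the kubeconfig TLS setup
    	if err != nil {
    		return err
    	}
    	resp, err := client.Get(cfg.Host + "/healthz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
    		return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
    	}
    	return nil
    }
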
	I0522 18:33:08.091244  160939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:33:08.271652  160939 request.go:629] Waited for 180.311539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271713  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271719  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.271727  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.271732  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.282797  160939 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0522 18:33:08.282826  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.282835  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.282840  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.282847  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.282853  160939 round_trippers.go:580]     Audit-Id: abfdd3f0-3612-4cc0-9cb4-169b86afc2f2
	I0522 18:33:08.282857  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.282862  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.284550  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.287099  160939 system_pods.go:59] 8 kube-system pods found
	I0522 18:33:08.287133  160939 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.287139  160939 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.287143  160939 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.287148  160939 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.287156  160939 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.287161  160939 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.287170  160939 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.287175  160939 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.287184  160939 system_pods.go:74] duration metric: took 195.931068ms to wait for pod list to return data ...
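
The "8 kube-system pods found ... Running" block above is a single PodList request, with each pod's phase read from its status. A rough client-go equivalent (illustrative names):

    package sketch

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // listSystemPods reproduces the "N kube-system pods found" summary:
    // one List call, then a line per pod with its UID and phase.
    func listSystemPods(ctx context.Context, cs *kubernetes.Clientset) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    	return nil
    }
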
	I0522 18:33:08.287199  160939 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:33:08.471518  160939 request.go:629] Waited for 184.244722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471609  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471620  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.471632  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.471638  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.473861  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.473879  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.473885  160939 round_trippers.go:580]     Audit-Id: 373a6323-7376-4ad7-973b-c7b9843fbc1e
	I0522 18:33:08.473889  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.473892  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.473895  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.473898  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.473902  160939 round_trippers.go:580]     Content-Length: 261
	I0522 18:33:08.473906  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.473926  160939 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:33:08.474181  160939 default_sa.go:45] found service account: "default"
	I0522 18:33:08.474221  160939 default_sa.go:55] duration metric: took 187.005275ms for default service account to be created ...
	I0522 18:33:08.474236  160939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:33:08.671668  160939 request.go:629] Waited for 197.344631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671731  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671738  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.671747  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.671754  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.674660  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.674693  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.674702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.674707  160939 round_trippers.go:580]     Audit-Id: a86ce0e7-c7ca-4d9a-b3f4-5977392399ab
	I0522 18:33:08.674710  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.674715  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.674721  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.674726  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.675199  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.677649  160939 system_pods.go:86] 8 kube-system pods found
	I0522 18:33:08.677676  160939 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.677682  160939 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.677689  160939 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.677700  160939 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.677712  160939 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.677718  160939 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.677728  160939 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.677736  160939 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.677746  160939 system_pods.go:126] duration metric: took 203.502619ms to wait for k8s-apps to be running ...
	I0522 18:33:08.677758  160939 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:33:08.677814  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:33:08.688253  160939 system_svc.go:56] duration metric: took 10.491535ms WaitForService to wait for kubelet
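
The kubelet gate shells out to systemd, and `systemctl is-active --quiet` reports state purely through its exit code. A one-line sketch of the same check run locally (minikube actually runs it over SSH inside the node container):

    package sketch

    import "os/exec"

    // kubeletRunning mirrors the WaitForService gate above: `systemctl
    // is-active --quiet` prints nothing and signals state via exit code,
    // so a nil error means the kubelet unit is active.
    func kubeletRunning() bool {
    	return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }
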
	I0522 18:33:08.688273  160939 kubeadm.go:576] duration metric: took 16.939194998s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:33:08.688296  160939 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:33:08.871835  160939 request.go:629] Waited for 183.471986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871919  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.871941  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.871948  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.873838  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:08.873861  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.873868  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.873874  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.873881  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.873884  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.873888  160939 round_trippers.go:580]     Audit-Id: 58d6eaf2-6ad2-480d-a68d-b490633e56b2
	I0522 18:33:08.873893  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.874043  160939 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"433","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5061 chars]
	I0522 18:33:08.874388  160939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:33:08.874407  160939 node_conditions.go:123] node cpu capacity is 8
	I0522 18:33:08.874418  160939 node_conditions.go:105] duration metric: took 186.116583ms to run NodePressure ...
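
The NodePressure step reads capacity straight from each node's status, which is where the 304681132Ki ephemeral-storage and 8-CPU figures above come from. A minimal sketch (assuming client-go; illustrative names):

    package sketch

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity reads the same fields the NodePressure check logs:
    // ephemeral-storage and CPU capacity from each node's status.
    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
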
	I0522 18:33:08.874431  160939 start.go:240] waiting for startup goroutines ...
	I0522 18:33:08.874437  160939 start.go:245] waiting for cluster config update ...
	I0522 18:33:08.874451  160939 start.go:254] writing updated cluster config ...
	I0522 18:33:08.876274  160939 out.go:177] 
	I0522 18:33:08.877676  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:33:08.877789  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.879303  160939 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:33:08.880612  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:33:08.881728  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:33:08.882756  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:08.882774  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:33:08.882785  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:33:08.882855  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:33:08.882870  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:33:08.882934  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.898326  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:33:08.898343  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:33:08.898358  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:33:08.898387  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:33:08.898479  160939 start.go:364] duration metric: took 72.592µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:33:08.898505  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:33:08.898623  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:33:08.900307  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:33:08.900408  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:33:08.900435  160939 client.go:168] LocalClient.Create starting
	I0522 18:33:08.900508  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:33:08.900541  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900564  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900623  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:33:08.900647  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900668  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900894  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:33:08.915750  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc001f32540 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:33:08.915790  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
	I0522 18:33:08.915845  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:33:08.930295  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:33:08.945898  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:33:08.945964  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:33:09.453161  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:33:09.453202  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:09.453224  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:33:09.453289  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:33:13.570301  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.116968437s)
	I0522 18:33:13.570337  160939 kic.go:203] duration metric: took 4.117109757s to extract preloaded images to volume ...
	W0522 18:33:13.570466  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:33:13.570568  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:33:13.614931  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:33:13.883217  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:33:13.899745  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:13.916953  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:33:13.956223  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:33:13.956258  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:33:14.377830  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:33:14.377884  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:33:14.398081  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.414616  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:33:14.414636  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
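
The "Creating ssh key for kic" step above generates an RSA keypair on the host and installs the public half as the container's authorized_keys (the 381-byte file copied in). A sketch of the generation half (assuming golang.org/x/crypto/ssh; the function name is illustrative):

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"encoding/pem"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // writeKeyPair generates an RSA key, writing the PEM private key to
    // path and the authorized_keys-format public key to path+".pub",
    // roughly the id_rsa / id_rsa.pub pair created for the kic container.
    func writeKeyPair(path string) error {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return err
    	}
    	priv := pem.EncodeToMemory(&pem.Block{
    		Type:  "RSA PRIVATE KEY",
    		Bytes: x509.MarshalPKCS1PrivateKey(key),
    	})
    	if err := os.WriteFile(path, priv, 0600); err != nil {
    		return err
    	}
    	pub, err := ssh.NewPublicKey(&key.PublicKey)
    	if err != nil {
    		return err
    	}
    	return os.WriteFile(path+".pub", ssh.MarshalAuthorizedKey(pub), 0644)
    }
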
	I0522 18:33:14.454848  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.472868  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:33:14.472944  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.489872  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.490088  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.490103  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:33:14.602489  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.602516  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:33:14.602569  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.619132  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.619380  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.619398  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:33:14.740786  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.740854  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.756827  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.756995  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.757012  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:33:14.867113  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:33:14.867142  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:33:14.867157  160939 ubuntu.go:177] setting up certificates
	I0522 18:33:14.867169  160939 provision.go:84] configureAuth start
	I0522 18:33:14.867230  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.882769  160939 provision.go:87] duration metric: took 15.590775ms to configureAuth
	W0522 18:33:14.882788  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.882814  160939 retry.go:31] will retry after 133.214µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.883930  160939 provision.go:84] configureAuth start
	I0522 18:33:14.883986  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.899452  160939 provision.go:87] duration metric: took 15.501642ms to configureAuth
	W0522 18:33:14.899474  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.899491  160939 retry.go:31] will retry after 108.916µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.900597  160939 provision.go:84] configureAuth start
	I0522 18:33:14.900654  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.915555  160939 provision.go:87] duration metric: took 14.940574ms to configureAuth
	W0522 18:33:14.915579  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.915597  160939 retry.go:31] will retry after 309.632µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.916706  160939 provision.go:84] configureAuth start
	I0522 18:33:14.916763  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.931974  160939 provision.go:87] duration metric: took 15.250688ms to configureAuth
	W0522 18:33:14.931998  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.932022  160939 retry.go:31] will retry after 318.322µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.933148  160939 provision.go:84] configureAuth start
	I0522 18:33:14.933214  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.948456  160939 provision.go:87] duration metric: took 15.28648ms to configureAuth
	W0522 18:33:14.948480  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.948498  160939 retry.go:31] will retry after 399.734µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.949641  160939 provision.go:84] configureAuth start
	I0522 18:33:14.949703  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.966281  160939 provision.go:87] duration metric: took 16.616876ms to configureAuth
	W0522 18:33:14.966304  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.966321  160939 retry.go:31] will retry after 408.958µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.967426  160939 provision.go:84] configureAuth start
	I0522 18:33:14.967490  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.983570  160939 provision.go:87] duration metric: took 16.124586ms to configureAuth
	W0522 18:33:14.983595  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.983618  160939 retry.go:31] will retry after 1.326072ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.985801  160939 provision.go:84] configureAuth start
	I0522 18:33:14.985868  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.000835  160939 provision.go:87] duration metric: took 15.012309ms to configureAuth
	W0522 18:33:15.000856  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.000876  160939 retry.go:31] will retry after 915.276µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.001989  160939 provision.go:84] configureAuth start
	I0522 18:33:15.002061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.016920  160939 provision.go:87] duration metric: took 14.912197ms to configureAuth
	W0522 18:33:15.016940  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.016956  160939 retry.go:31] will retry after 2.309554ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.020139  160939 provision.go:84] configureAuth start
	I0522 18:33:15.020206  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.035720  160939 provision.go:87] duration metric: took 15.563337ms to configureAuth
	W0522 18:33:15.035737  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.035758  160939 retry.go:31] will retry after 5.684682ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.041949  160939 provision.go:84] configureAuth start
	I0522 18:33:15.042023  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.057131  160939 provision.go:87] duration metric: took 15.161716ms to configureAuth
	W0522 18:33:15.057153  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.057173  160939 retry.go:31] will retry after 7.16749ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.065354  160939 provision.go:84] configureAuth start
	I0522 18:33:15.065419  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.080211  160939 provision.go:87] duration metric: took 14.836861ms to configureAuth
	W0522 18:33:15.080233  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.080253  160939 retry.go:31] will retry after 11.273171ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.092437  160939 provision.go:84] configureAuth start
	I0522 18:33:15.092522  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.107812  160939 provision.go:87] duration metric: took 15.35491ms to configureAuth
	W0522 18:33:15.107829  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.107845  160939 retry.go:31] will retry after 8.109728ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.117029  160939 provision.go:84] configureAuth start
	I0522 18:33:15.117103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.132558  160939 provision.go:87] duration metric: took 15.508983ms to configureAuth
	W0522 18:33:15.132577  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.132597  160939 retry.go:31] will retry after 10.345201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.143792  160939 provision.go:84] configureAuth start
	I0522 18:33:15.143857  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.159011  160939 provision.go:87] duration metric: took 15.196792ms to configureAuth
	W0522 18:33:15.159034  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.159054  160939 retry.go:31] will retry after 30.499115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.190240  160939 provision.go:84] configureAuth start
	I0522 18:33:15.190329  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.207177  160939 provision.go:87] duration metric: took 16.913741ms to configureAuth
	W0522 18:33:15.207195  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.207211  160939 retry.go:31] will retry after 63.879043ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.271445  160939 provision.go:84] configureAuth start
	I0522 18:33:15.271548  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.287528  160939 provision.go:87] duration metric: took 16.057048ms to configureAuth
	W0522 18:33:15.287550  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.287569  160939 retry.go:31] will retry after 67.853567ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.355802  160939 provision.go:84] configureAuth start
	I0522 18:33:15.355901  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.372258  160939 provision.go:87] duration metric: took 16.425467ms to configureAuth
	W0522 18:33:15.372281  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.372300  160939 retry.go:31] will retry after 129.065548ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.501513  160939 provision.go:84] configureAuth start
	I0522 18:33:15.501606  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.517774  160939 provision.go:87] duration metric: took 16.234544ms to configureAuth
	W0522 18:33:15.517792  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.517809  160939 retry.go:31] will retry after 177.855143ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.696167  160939 provision.go:84] configureAuth start
	I0522 18:33:15.696277  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.712184  160939 provision.go:87] duration metric: took 15.973904ms to configureAuth
	W0522 18:33:15.712203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.712222  160939 retry.go:31] will retry after 282.785493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.995691  160939 provision.go:84] configureAuth start
	I0522 18:33:15.995782  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.011555  160939 provision.go:87] duration metric: took 15.836293ms to configureAuth
	W0522 18:33:16.011573  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.011590  160939 retry.go:31] will retry after 182.7986ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.194929  160939 provision.go:84] configureAuth start
	I0522 18:33:16.195022  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.210991  160939 provision.go:87] duration metric: took 16.035288ms to configureAuth
	W0522 18:33:16.211015  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.211031  160939 retry.go:31] will retry after 462.848752ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.674586  160939 provision.go:84] configureAuth start
	I0522 18:33:16.674669  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.691880  160939 provision.go:87] duration metric: took 17.266922ms to configureAuth
	W0522 18:33:16.691906  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.691924  160939 retry.go:31] will retry after 502.555206ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.194526  160939 provision.go:84] configureAuth start
	I0522 18:33:17.194646  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.210421  160939 provision.go:87] duration metric: took 15.865877ms to configureAuth
	W0522 18:33:17.210440  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.210460  160939 retry.go:31] will retry after 567.726401ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.779177  160939 provision.go:84] configureAuth start
	I0522 18:33:17.779290  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.795539  160939 provision.go:87] duration metric: took 16.336289ms to configureAuth
	W0522 18:33:17.795558  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.795575  160939 retry.go:31] will retry after 1.826878631s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
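The retry cadence visible above — sub-millisecond at first, roughly doubling with jitter until the waits reach tens of seconds — is a standard exponential-backoff loop. A sketch of that pattern under those assumptions; the function name and signature are illustrative, not retry.go's actual API:

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff retries op with jittered, doubling delays until the
    // total budget is spent, then returns the last error.
    func retryWithBackoff(op func() error, initial, total time.Duration) error {
    	deadline := time.Now().Add(total)
    	delay := initial
    	var err error
    	for time.Now().Before(deadline) {
    		if err = op(); err == nil {
    			return nil
    		}
    		// Jitter the delay, then roughly double it for the next attempt.
    		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    	return err
    }

    func main() {
    	err := retryWithBackoff(
    		func() error { return fmt.Errorf("temporary error") },
    		time.Millisecond, 50*time.Millisecond)
    	fmt.Println("gave up:", err)
    }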
	I0522 18:33:19.622720  160939 provision.go:84] configureAuth start
	I0522 18:33:19.622824  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:19.638518  160939 provision.go:87] duration metric: took 15.756609ms to configureAuth
	W0522 18:33:19.638535  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.638551  160939 retry.go:31] will retry after 1.924893574s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.564442  160939 provision.go:84] configureAuth start
	I0522 18:33:21.564544  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:21.580835  160939 provision.go:87] duration metric: took 16.362041ms to configureAuth
	W0522 18:33:21.580858  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.580874  160939 retry.go:31] will retry after 4.939303373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.521956  160939 provision.go:84] configureAuth start
	I0522 18:33:26.522061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:26.537982  160939 provision.go:87] duration metric: took 16.001203ms to configureAuth
	W0522 18:33:26.538004  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.538030  160939 retry.go:31] will retry after 3.636518909s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.175081  160939 provision.go:84] configureAuth start
	I0522 18:33:30.175184  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:30.191022  160939 provision.go:87] duration metric: took 15.915164ms to configureAuth
	W0522 18:33:30.191041  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.191058  160939 retry.go:31] will retry after 10.480093853s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.671328  160939 provision.go:84] configureAuth start
	I0522 18:33:40.671406  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:40.687409  160939 provision.go:87] duration metric: took 16.054951ms to configureAuth
	W0522 18:33:40.687427  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.687455  160939 retry.go:31] will retry after 15.937633407s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.627256  160939 provision.go:84] configureAuth start
	I0522 18:33:56.627376  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:56.643481  160939 provision.go:87] duration metric: took 16.179065ms to configureAuth
	W0522 18:33:56.643501  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.643521  160939 retry.go:31] will retry after 13.921044681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.565323  160939 provision.go:84] configureAuth start
	I0522 18:34:10.565412  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:10.582184  160939 provision.go:87] duration metric: took 16.828213ms to configureAuth
	W0522 18:34:10.582203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.582221  160939 retry.go:31] will retry after 29.913467421s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.496709  160939 provision.go:84] configureAuth start
	I0522 18:34:40.496791  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:40.512924  160939 provision.go:87] duration metric: took 16.185762ms to configureAuth
	W0522 18:34:40.512946  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512964  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512971  160939 machine.go:97] duration metric: took 1m26.040084691s to provisionDockerMachine
	I0522 18:34:40.512977  160939 client.go:171] duration metric: took 1m31.612534317s to LocalClient.Create
	I0522 18:34:42.514189  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.514234  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:42.530404  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:34:42.611715  160939 command_runner.go:130] > 27%
	I0522 18:34:42.611789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:34:42.615669  160939 command_runner.go:130] > 214G
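Before releasing the machines lock, the runner samples disk pressure on the new node over SSH: the percentage of /var in use (27%) and the gigabytes still free (214G). The same two checks, sketched locally rather than through the ssh_runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // sh runs a shell pipeline and returns its trimmed stdout.
    func sh(cmd string) string {
    	out, _ := exec.Command("sh", "-c", cmd).Output()
    	return strings.TrimSpace(string(out))
    }

    func main() {
    	usedPct := sh(`df -h /var | awk 'NR==2{print $5}'`)
    	freeGB := sh(`df -BG /var | awk 'NR==2{print $4}'`)
    	fmt.Printf("/var: %s used, %s free\n", usedPct, freeGB)
    }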
	I0522 18:34:42.615707  160939 start.go:128] duration metric: took 1m33.717073149s to createHost
	I0522 18:34:42.615722  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m33.717228717s
	W0522 18:34:42.615744  160939 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:42.616137  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:42.632434  160939 stop.go:39] StopHost: multinode-737786-m02
	W0522 18:34:42.632685  160939 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.634506  160939 out.go:177] * Stopping node "multinode-737786-m02"  ...
	I0522 18:34:42.635683  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	W0522 18:34:42.651010  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.652276  160939 out.go:177] * Powering off "multinode-737786-m02" via SSH ...
	I0522 18:34:42.653470  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	I0522 18:34:43.708767  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.725456  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:43.725497  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:43.725503  160939 stop.go:96] shutdown container: err=<nil>
	I0522 18:34:43.725538  160939 main.go:141] libmachine: Stopping "multinode-737786-m02"...
	I0522 18:34:43.725609  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.740494  160939 stop.go:66] stop err: Machine "multinode-737786-m02" is already stopped.
	I0522 18:34:43.740519  160939 stop.go:69] host is already stopped
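The stop path above is: ask the init process inside the container to power off ("sudo init 0"), then poll docker's view of the container until it leaves the running state. A sketch of that poll loop, assuming the docker CLI is on PATH; note docker itself reports the state as "exited", and the log's "Stopped" is minikube's own wording:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerStatus returns docker's State.Status for the named container.
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", name,
    		"--format", "{{.State.Status}}").Output()
    	return strings.TrimSpace(string(out)), err
    }

    func main() {
    	name := "multinode-737786-m02"
    	// Graceful shutdown via the container's init, as in the log above.
    	_ = exec.Command("docker", "exec", "--privileged", "-t", name,
    		"/bin/bash", "-c", "sudo init 0").Run()
    	for i := 0; i < 10; i++ {
    		if s, err := containerStatus(name); err == nil && s == "exited" {
    			fmt.Println("container stopped")
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("timed out waiting for container to stop")
    }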
	W0522 18:34:44.740739  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:44.742589  160939 out.go:177] * Deleting "multinode-737786-m02" in docker ...
	I0522 18:34:44.743791  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	I0522 18:34:44.759917  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:44.775348  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	W0522 18:34:44.791230  160939 cli_runner.go:211] docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:34:44.791265  160939 oci.go:650] error shutdown multinode-737786-m02: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 2dc5a71c55c9ef5d6ad1baa728c2ff15efe34f377c26beee83af68ffc394ce01 is not running
	I0522 18:34:45.792215  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:45.808448  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:45.808478  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:45.808522  160939 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m02
	I0522 18:34:45.828241  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	W0522 18:34:45.843001  160939 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m02 returned with exit code 1
	I0522 18:34:45.843068  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:45.858067  160939 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:34:45.872863  160939 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:34:45.872955  160939 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
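docker network rm fails here because the primary node's container is still attached to multinode-737786, and minikube explicitly tolerates that ("which might be okay"): the network must survive for the remaining node. One way to make that precondition explicit, sketched with the docker CLI rather than minikube's kic.go:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("docker", "network", "inspect", "multinode-737786",
    		"--format", "{{json .Containers}}").Output()
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	// The Containers map is keyed by container ID; any entry means the
    	// network is still in use and removal would fail.
    	attached := map[string]json.RawMessage{}
    	_ = json.Unmarshal(out, &attached)
    	if len(attached) > 0 {
    		fmt.Printf("network still attached to %d container(s); skipping rm\n", len(attached))
    		return
    	}
    	_ = exec.Command("docker", "network", "rm", "multinode-737786").Run()
    }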
	W0522 18:34:45.873163  160939 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:45.873175  160939 start.go:728] Will try again in 5 seconds ...
	I0522 18:34:50.874261  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:34:50.874388  160939 start.go:364] duration metric: took 68.497µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:34:50.874412  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:34:50.874486  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:34:50.876407  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:34:50.876543  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:34:50.876576  160939 client.go:168] LocalClient.Create starting
	I0522 18:34:50.876662  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:34:50.876712  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876732  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.876835  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:34:50.876869  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876890  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.877138  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:50.893470  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc0009258c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:34:50.893509  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
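The "calculated static IP" step derives node addresses from the existing network's gateway: with the gateway at 192.168.67.1, the primary node takes .2 and this second node takes .3, matching the --ip flag on the docker run below. A sketch of that derivation; the helper is illustrative and skips the subnet-boundary checks a real implementation would need:

    package main

    import (
    	"fmt"
    	"net"
    )

    // nodeIP offsets the gateway's last octet by the node index.
    func nodeIP(gateway string, nodeIndex int) (string, error) {
    	ip := net.ParseIP(gateway).To4()
    	if ip == nil {
    		return "", fmt.Errorf("not an IPv4 gateway: %s", gateway)
    	}
    	out := make(net.IP, 4)
    	copy(out, ip)
    	out[3] += byte(nodeIndex) // .1 gateway -> node 1 at .2, node 2 at .3, ...
    	return out.String(), nil
    }

    func main() {
    	ip, _ := nodeIP("192.168.67.1", 2) // second node
    	fmt.Println(ip)                    // 192.168.67.3
    }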
	I0522 18:34:50.893558  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:34:50.909079  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:34:50.925444  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:34:50.925538  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:34:51.321868  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:34:51.321909  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:34:51.321928  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:34:51.321980  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:34:55.613221  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291204502s)
	I0522 18:34:55.613251  160939 kic.go:203] duration metric: took 4.291320169s to extract preloaded images to volume ...
	W0522 18:34:55.613360  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:34:55.613435  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:34:55.658317  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:34:55.924047  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:34:55.941247  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:55.958588  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:34:56.004446  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:34:56.004476  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:34:56.219497  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:34:56.219536  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:34:56.240489  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.268881  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:34:56.268907  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:34:56.353114  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.375972  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:34:56.376058  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.395706  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.395915  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.395934  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:34:56.554445  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.554477  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:34:56.554533  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.573230  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.573401  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.573414  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:34:56.702163  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.702242  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.718029  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.718187  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.718204  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:34:56.830876  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:34:56.830907  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:34:56.830922  160939 ubuntu.go:177] setting up certificates
	I0522 18:34:56.830931  160939 provision.go:84] configureAuth start
	I0522 18:34:56.830976  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.846805  160939 provision.go:87] duration metric: took 15.865379ms to configureAuth
	W0522 18:34:56.846831  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.846851  160939 retry.go:31] will retry after 140.64µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.847967  160939 provision.go:84] configureAuth start
	I0522 18:34:56.848042  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.862744  160939 provision.go:87] duration metric: took 14.756628ms to configureAuth
	W0522 18:34:56.862761  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.862777  160939 retry.go:31] will retry after 137.24µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.863887  160939 provision.go:84] configureAuth start
	I0522 18:34:56.863944  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.878368  160939 provision.go:87] duration metric: took 14.464443ms to configureAuth
	W0522 18:34:56.878383  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.878401  160939 retry.go:31] will retry after 307.999µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.879516  160939 provision.go:84] configureAuth start
	I0522 18:34:56.879573  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.894089  160939 provision.go:87] duration metric: took 14.555182ms to configureAuth
	W0522 18:34:56.894104  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.894119  160939 retry.go:31] will retry after 344.81µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.895224  160939 provision.go:84] configureAuth start
	I0522 18:34:56.895305  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.909660  160939 provision.go:87] duration metric: took 14.420335ms to configureAuth
	W0522 18:34:56.909677  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.909697  160939 retry.go:31] will retry after 721.739µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.910804  160939 provision.go:84] configureAuth start
	I0522 18:34:56.910856  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.925678  160939 provision.go:87] duration metric: took 14.857697ms to configureAuth
	W0522 18:34:56.925695  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.925714  160939 retry.go:31] will retry after 381.6µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.926834  160939 provision.go:84] configureAuth start
	I0522 18:34:56.926886  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.941681  160939 provision.go:87] duration metric: took 14.831201ms to configureAuth
	W0522 18:34:56.941702  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.941722  160939 retry.go:31] will retry after 897.088µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.942836  160939 provision.go:84] configureAuth start
	I0522 18:34:56.942908  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.957491  160939 provision.go:87] duration metric: took 14.636033ms to configureAuth
	W0522 18:34:56.957512  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.957529  160939 retry.go:31] will retry after 1.800181ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.959714  160939 provision.go:84] configureAuth start
	I0522 18:34:56.959790  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.976307  160939 provision.go:87] duration metric: took 16.571335ms to configureAuth
	W0522 18:34:56.976326  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.976342  160939 retry.go:31] will retry after 2.324455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.979479  160939 provision.go:84] configureAuth start
	I0522 18:34:56.979532  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.994677  160939 provision.go:87] duration metric: took 15.180277ms to configureAuth
	W0522 18:34:56.994693  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.994709  160939 retry.go:31] will retry after 3.105759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.998893  160939 provision.go:84] configureAuth start
	I0522 18:34:56.998946  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.014214  160939 provision.go:87] duration metric: took 15.303755ms to configureAuth
	W0522 18:34:57.014235  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.014254  160939 retry.go:31] will retry after 5.839455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.020445  160939 provision.go:84] configureAuth start
	I0522 18:34:57.020525  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.035868  160939 provision.go:87] duration metric: took 15.4048ms to configureAuth
	W0522 18:34:57.035886  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.035903  160939 retry.go:31] will retry after 5.406932ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.042088  160939 provision.go:84] configureAuth start
	I0522 18:34:57.042156  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.058449  160939 provision.go:87] duration metric: took 16.342041ms to configureAuth
	W0522 18:34:57.058472  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.058492  160939 retry.go:31] will retry after 11.838168ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.070675  160939 provision.go:84] configureAuth start
	I0522 18:34:57.070741  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.085470  160939 provision.go:87] duration metric: took 14.777244ms to configureAuth
	W0522 18:34:57.085486  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.085502  160939 retry.go:31] will retry after 23.959822ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.109694  160939 provision.go:84] configureAuth start
	I0522 18:34:57.109776  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.124985  160939 provision.go:87] duration metric: took 15.261358ms to configureAuth
	W0522 18:34:57.125000  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.125016  160939 retry.go:31] will retry after 27.869578ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.153221  160939 provision.go:84] configureAuth start
	I0522 18:34:57.153307  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.169108  160939 provision.go:87] duration metric: took 15.85438ms to configureAuth
	W0522 18:34:57.169127  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.169146  160939 retry.go:31] will retry after 51.257536ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.221342  160939 provision.go:84] configureAuth start
	I0522 18:34:57.221408  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.237003  160939 provision.go:87] duration metric: took 15.637311ms to configureAuth
	W0522 18:34:57.237024  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.237043  160939 retry.go:31] will retry after 39.576908ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.277194  160939 provision.go:84] configureAuth start
	I0522 18:34:57.277272  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.292521  160939 provision.go:87] duration metric: took 15.297184ms to configureAuth
	W0522 18:34:57.292539  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.292557  160939 retry.go:31] will retry after 99.452062ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.392811  160939 provision.go:84] configureAuth start
	I0522 18:34:57.392913  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.410711  160939 provision.go:87] duration metric: took 17.84636ms to configureAuth
	W0522 18:34:57.410765  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.410815  160939 retry.go:31] will retry after 143.960372ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.555133  160939 provision.go:84] configureAuth start
	I0522 18:34:57.555208  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.571320  160939 provision.go:87] duration metric: took 16.160526ms to configureAuth
	W0522 18:34:57.571343  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.571360  160939 retry.go:31] will retry after 155.348601ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.727681  160939 provision.go:84] configureAuth start
	I0522 18:34:57.727762  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.743313  160939 provision.go:87] duration metric: took 15.603694ms to configureAuth
	W0522 18:34:57.743335  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.743351  160939 retry.go:31] will retry after 378.804808ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.122902  160939 provision.go:84] configureAuth start
	I0522 18:34:58.123010  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.139688  160939 provision.go:87] duration metric: took 16.744877ms to configureAuth
	W0522 18:34:58.139707  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.139724  160939 retry.go:31] will retry after 334.927027ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.475218  160939 provision.go:84] configureAuth start
	I0522 18:34:58.475348  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.491224  160939 provision.go:87] duration metric: took 15.959288ms to configureAuth
	W0522 18:34:58.491241  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.491258  160939 retry.go:31] will retry after 382.857061ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.874898  160939 provision.go:84] configureAuth start
	I0522 18:34:58.875006  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.891400  160939 provision.go:87] duration metric: took 16.476022ms to configureAuth
	W0522 18:34:58.891425  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.891445  160939 retry.go:31] will retry after 908.607112ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.800452  160939 provision.go:84] configureAuth start
	I0522 18:34:59.800565  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:59.817521  160939 provision.go:87] duration metric: took 17.040678ms to configureAuth
	W0522 18:34:59.817541  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.817559  160939 retry.go:31] will retry after 2.399990762s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.218011  160939 provision.go:84] configureAuth start
	I0522 18:35:02.218103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:02.233382  160939 provision.go:87] duration metric: took 15.343422ms to configureAuth
	W0522 18:35:02.233400  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.233417  160939 retry.go:31] will retry after 3.631413751s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.866094  160939 provision.go:84] configureAuth start
	I0522 18:35:05.866192  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:05.883038  160939 provision.go:87] duration metric: took 16.913162ms to configureAuth
	W0522 18:35:05.883057  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.883075  160939 retry.go:31] will retry after 4.401726343s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.285941  160939 provision.go:84] configureAuth start
	I0522 18:35:10.286047  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:10.303158  160939 provision.go:87] duration metric: took 17.185304ms to configureAuth
	W0522 18:35:10.303178  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.303195  160939 retry.go:31] will retry after 5.499851087s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.803345  160939 provision.go:84] configureAuth start
	I0522 18:35:15.803456  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:15.820047  160939 provision.go:87] duration metric: took 16.668915ms to configureAuth
	W0522 18:35:15.820069  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.820088  160939 retry.go:31] will retry after 6.21478213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.035749  160939 provision.go:84] configureAuth start
	I0522 18:35:22.035888  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:22.052346  160939 provision.go:87] duration metric: took 16.569923ms to configureAuth
	W0522 18:35:22.052365  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.052383  160939 retry.go:31] will retry after 10.717404274s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.770612  160939 provision.go:84] configureAuth start
	I0522 18:35:32.770702  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:32.786847  160939 provision.go:87] duration metric: took 16.20902ms to configureAuth
	W0522 18:35:32.786866  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.786882  160939 retry.go:31] will retry after 26.374349839s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.162251  160939 provision.go:84] configureAuth start
	I0522 18:35:59.162338  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:59.177866  160939 provision.go:87] duration metric: took 15.590678ms to configureAuth
	W0522 18:35:59.177883  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.177900  160939 retry.go:31] will retry after 23.779194983s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.957560  160939 provision.go:84] configureAuth start
	I0522 18:36:22.957642  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:36:22.973473  160939 provision.go:87] duration metric: took 15.882846ms to configureAuth
	W0522 18:36:22.973490  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973508  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973514  160939 machine.go:97] duration metric: took 1m26.59751999s to provisionDockerMachine
	I0522 18:36:22.973521  160939 client.go:171] duration metric: took 1m32.0969361s to LocalClient.Create
	I0522 18:36:24.974123  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:24.974170  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:36:24.990325  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:36:25.071724  160939 command_runner.go:130] > 27%
	I0522 18:36:25.071789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:36:25.075456  160939 command_runner.go:130] > 214G
	I0522 18:36:25.075742  160939 start.go:128] duration metric: took 1m34.201241799s to createHost
	I0522 18:36:25.075767  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m34.20136546s
	W0522 18:36:25.075854  160939 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:25.077767  160939 out.go:177] 
	W0522 18:36:25.079095  160939 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:36:25.079109  160939 out.go:239] * 
	W0522 18:36:25.079919  160939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:36:25.081455  160939 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:98: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-737786 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker" : exit status 80
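Every retry above fails the same way: minikube resolves a node's addresses with a single `docker container inspect` Go template that prints "<IPv4>,<IPv6>" for the named entry in .NetworkSettings.Networks, then splits the output on the comma. A `{{with (index ...)}}` template emits nothing when the indexed key is missing or empty, so the split yields one empty field instead of two, which is exactly the repeated "container addresses should have 2 values, got 1 values: []". A minimal Go sketch of that parsing, reconstructed from the log lines (the helper name and error wording here are inferred from the messages, not copied from minikube's source):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIPs mirrors the lookup shown in the log: one inspect call
	// prints "IPv4,IPv6" for the given network entry, and anything other
	// than two comma-separated fields is reported as an error.
	func containerIPs(container, network string) (string, string, error) {
		format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", "", err
		}
		ips := strings.Split(strings.TrimSpace(string(out)), ",")
		if len(ips) != 2 {
			// An empty template result splits into a single empty field,
			// matching "should have 2 values, got 1 values: []".
			return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(ips), ips)
		}
		return ips[0], ips[1], nil
	}

	func main() {
		// The failing lookups index the Networks map by "multinode-737786-m02";
		// the template printed nothing for that key, hence the one-field split.
		v4, v6, err := containerIPs("multinode-737786-m02", "multinode-737786-m02")
		fmt.Println(v4, v6, err)
	}

Note that the post-mortem inspect of the primary node below shows a populated Networks["multinode-737786"] entry; only the m02 lookups came back empty.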
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:32:24.061487531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f033da40320ba3759bccac938ed954a52e8591012b592a9d459eac191ead142",
	            "SandboxKey": "/var/run/docker/netns/0f033da40320",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "0dc537a1f234204c25e41871b0c1dd246d8d646b8557cafc1f206a6312a58796",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
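Two fields in this payload drive everything the harness does next: the node IP is not the empty top-level NetworkSettings.IPAddress but the per-network entry Networks["multinode-737786"].IPAddress (192.168.67.2), and SSH goes through the dynamically published host port for "22/tcp" (127.0.0.1:32897, the address the sshutil client dials in the log above). A short Go sketch of that extraction, with the inspect structure trimmed to just the fields read here (field names follow the JSON above; the rest of Docker's payload is ignored):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectInfo keeps only the fields used below; Docker's real inspect
	// payload is far larger, as the dump above shows.
	type inspectInfo struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIP   string `json:"HostIp"`
				HostPort string `json:"HostPort"`
			} `json:"Ports"`
			Networks map[string]struct {
				IPAddress string `json:"IPAddress"`
			} `json:"Networks"`
		} `json:"NetworkSettings"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "multinode-737786").Output()
		if err != nil {
			panic(err)
		}
		var infos []inspectInfo // docker inspect always returns a JSON array
		if err := json.Unmarshal(out, &infos); err != nil || len(infos) == 0 {
			panic("unexpected inspect output")
		}
		ns := infos[0].NetworkSettings
		// The container IP lives under the named network, not the top level.
		fmt.Println("node IP:", ns.Networks["multinode-737786"].IPAddress) // 192.168.67.2
		// SSH is published on a dynamic host port bound to 127.0.0.1.
		fmt.Println("ssh addr:", ns.Ports["22/tcp"][0].HostIP+":"+ns.Ports["22/tcp"][0].HostPort) // 127.0.0.1:32897 in this run
	}

The one-line template equivalent of the port lookup, '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}', is the cli_runner invocation that recurs throughout the log.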
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/FreshStart2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/FreshStart2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|-------------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |         Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|-------------------------|---------|---------|---------------------|---------------------|
	| start   | -p docker-network-516201       | docker-network-516201   | jenkins | v1.33.1 | 22 May 24 18:21 UTC | 22 May 24 18:25 UTC |
	|         | --network=bridge               |                         |         |         |                     |                     |
	| delete  | -p docker-network-516201       | docker-network-516201   | jenkins | v1.33.1 | 22 May 24 18:25 UTC | 22 May 24 18:25 UTC |
	| start   | -p existing-network-769777     | existing-network-769777 | jenkins | v1.33.1 | 22 May 24 18:25 UTC | 22 May 24 18:30 UTC |
	|         | --network=existing-network     |                         |         |         |                     |                     |
	| delete  | -p existing-network-769777     | existing-network-769777 | jenkins | v1.33.1 | 22 May 24 18:30 UTC | 22 May 24 18:30 UTC |
	| start   | -p custom-subnet-000956        | custom-subnet-000956    | jenkins | v1.33.1 | 22 May 24 18:30 UTC | 22 May 24 18:30 UTC |
	|         | --subnet=192.168.60.0/24       |                         |         |         |                     |                     |
	| delete  | -p custom-subnet-000956        | custom-subnet-000956    | jenkins | v1.33.1 | 22 May 24 18:30 UTC | 22 May 24 18:30 UTC |
	| start   | -p static-ip-885448            | static-ip-885448        | jenkins | v1.33.1 | 22 May 24 18:30 UTC | 22 May 24 18:30 UTC |
	|         | --static-ip=192.168.200.200    |                         |         |         |                     |                     |
	| ip      | static-ip-885448 ip            | static-ip-885448        | jenkins | v1.33.1 | 22 May 24 18:30 UTC | 22 May 24 18:30 UTC |
	| delete  | -p static-ip-885448            | static-ip-885448        | jenkins | v1.33.1 | 22 May 24 18:30 UTC | 22 May 24 18:30 UTC |
	| start   | -p first-515789                | first-515789            | jenkins | v1.33.1 | 22 May 24 18:30 UTC | 22 May 24 18:31 UTC |
	|         | --driver=docker                |                         |         |         |                     |                     |
	|         | --container-runtime=docker     |                         |         |         |                     |                     |
	| start   | -p second-518399               | second-518399           | jenkins | v1.33.1 | 22 May 24 18:31 UTC | 22 May 24 18:31 UTC |
	|         | --driver=docker                |                         |         |         |                     |                     |
	|         | --container-runtime=docker     |                         |         |         |                     |                     |
	| delete  | -p second-518399               | second-518399           | jenkins | v1.33.1 | 22 May 24 18:31 UTC | 22 May 24 18:31 UTC |
	| delete  | -p first-515789                | first-515789            | jenkins | v1.33.1 | 22 May 24 18:31 UTC | 22 May 24 18:31 UTC |
	| start   | -p mount-start-1-736299        | mount-start-1-736299    | jenkins | v1.33.1 | 22 May 24 18:31 UTC | 22 May 24 18:31 UTC |
	|         | --memory=2048 --mount          |                         |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize    |                         |         |         |                     |                     |
	|         | 6543 --mount-port 46464        |                         |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes  |                         |         |         |                     |                     |
	|         | --driver=docker                |                         |         |         |                     |                     |
	|         | --container-runtime=docker     |                         |         |         |                     |                     |
	| ssh     | mount-start-1-736299 ssh -- ls | mount-start-1-736299    | jenkins | v1.33.1 | 22 May 24 18:31 UTC | 22 May 24 18:31 UTC |
	|         | /minikube-host                 |                         |         |         |                     |                     |
	| start   | -p mount-start-2-747898        | mount-start-2-747898    | jenkins | v1.33.1 | 22 May 24 18:31 UTC | 22 May 24 18:32 UTC |
	|         | --memory=2048 --mount          |                         |         |         |                     |                     |
	|         | --mount-gid 0 --mount-msize    |                         |         |         |                     |                     |
	|         | 6543 --mount-port 46465        |                         |         |         |                     |                     |
	|         | --mount-uid 0 --no-kubernetes  |                         |         |         |                     |                     |
	|         | --driver=docker                |                         |         |         |                     |                     |
	|         | --container-runtime=docker     |                         |         |         |                     |                     |
	| ssh     | mount-start-2-747898 ssh -- ls | mount-start-2-747898    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	|         | /minikube-host                 |                         |         |         |                     |                     |
	| delete  | -p mount-start-1-736299        | mount-start-1-736299    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	|         | --alsologtostderr -v=5         |                         |         |         |                     |                     |
	| ssh     | mount-start-2-747898 ssh -- ls | mount-start-2-747898    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	|         | /minikube-host                 |                         |         |         |                     |                     |
	| stop    | -p mount-start-2-747898        | mount-start-2-747898    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| start   | -p mount-start-2-747898        | mount-start-2-747898    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| ssh     | mount-start-2-747898 ssh -- ls | mount-start-2-747898    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	|         | /minikube-host                 |                         |         |         |                     |                     |
	| delete  | -p mount-start-2-747898        | mount-start-2-747898    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| delete  | -p mount-start-1-736299        | mount-start-1-736299    | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| start   | -p multinode-737786            | multinode-737786        | jenkins | v1.33.1 | 22 May 24 18:32 UTC |                     |
	|         | --wait=true --memory=2200      |                         |         |         |                     |                     |
	|         | --nodes=2 -v=8                 |                         |         |         |                     |                     |
	|         | --alsologtostderr              |                         |         |         |                     |                     |
	|         | --driver=docker                |                         |         |         |                     |                     |
	|         | --container-runtime=docker     |                         |         |         |                     |                     |
	|---------|--------------------------------|-------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:32:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:32:18.820070  160939 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:32:18.820158  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820166  160939 out.go:304] Setting ErrFile to fd 2...
	I0522 18:32:18.820169  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820356  160939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:32:18.820906  160939 out.go:298] Setting JSON to false
	I0522 18:32:18.821847  160939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4483,"bootTime":1716398256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:32:18.821903  160939 start.go:139] virtualization: kvm guest
	I0522 18:32:18.825068  160939 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:32:18.826450  160939 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:32:18.826451  160939 notify.go:220] Checking for updates...
	I0522 18:32:18.827917  160939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:32:18.829159  160939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:18.830471  160939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:32:18.832039  160939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:32:18.833509  160939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:32:18.835235  160939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:32:18.856978  160939 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:32:18.857075  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.904065  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.895172586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.904163  160939 docker.go:295] overlay module found
	I0522 18:32:18.906205  160939 out.go:177] * Using the docker driver based on user configuration
	I0522 18:32:18.907716  160939 start.go:297] selected driver: docker
	I0522 18:32:18.907745  160939 start.go:901] validating driver "docker" against <nil>
	I0522 18:32:18.907759  160939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:32:18.908486  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.953709  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.945190998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.953883  160939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 18:32:18.954091  160939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:32:18.956247  160939 out.go:177] * Using Docker driver with root privileges
	I0522 18:32:18.957858  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:18.957878  160939 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 18:32:18.957886  160939 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 18:32:18.957966  160939 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:18.959670  160939 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:32:18.961220  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:32:18.962715  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:32:18.964248  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:18.964293  160939 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:32:18.964303  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:32:18.964344  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:32:18.964398  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:32:18.964409  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:32:18.964718  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:18.964741  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json: {Name:mk43b46af9c3b0b30bdffa978db6463aacef7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:18.980726  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:32:18.980763  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:32:18.980786  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:32:18.980821  160939 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:32:18.980939  160939 start.go:364] duration metric: took 90.565µs to acquireMachinesLock for "multinode-737786"
	I0522 18:32:18.980970  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:18.981093  160939 start.go:125] createHost starting for "" (driver="docker")
	I0522 18:32:18.983462  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:32:18.983714  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:32:18.983748  160939 client.go:168] LocalClient.Create starting
	I0522 18:32:18.983834  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:32:18.983868  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983888  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.983948  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:32:18.983967  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983980  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.984396  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 18:32:18.999077  160939 cli_runner.go:211] docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 18:32:18.999133  160939 network_create.go:281] running [docker network inspect multinode-737786] to gather additional debugging logs...
	I0522 18:32:18.999152  160939 cli_runner.go:164] Run: docker network inspect multinode-737786
	W0522 18:32:19.013736  160939 cli_runner.go:211] docker network inspect multinode-737786 returned with exit code 1
	I0522 18:32:19.013763  160939 network_create.go:284] error running [docker network inspect multinode-737786]: docker network inspect multinode-737786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-737786 not found
	I0522 18:32:19.013789  160939 network_create.go:286] output of [docker network inspect multinode-737786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-737786 not found
	
	** /stderr **
	I0522 18:32:19.013898  160939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:19.029452  160939 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-638c6f0967c1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:dc:4f:16} reservation:<nil>}
	I0522 18:32:19.029912  160939 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcc438b661e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:35:35:2f} reservation:<nil>}
	I0522 18:32:19.030359  160939 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a34820}
	I0522 18:32:19.030382  160939 network_create.go:124] attempt to create docker network multinode-737786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0522 18:32:19.030423  160939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-737786 multinode-737786
	I0522 18:32:19.080955  160939 network_create.go:108] docker network multinode-737786 192.168.67.0/24 created
	I0522 18:32:19.080984  160939 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-737786" container
	I0522 18:32:19.081036  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:32:19.095483  160939 cli_runner.go:164] Run: docker volume create multinode-737786 --label name.minikube.sigs.k8s.io=multinode-737786 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:32:19.111371  160939 oci.go:103] Successfully created a docker volume multinode-737786
	I0522 18:32:19.111438  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --entrypoint /usr/bin/test -v multinode-737786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:32:19.598377  160939 oci.go:107] Successfully prepared a docker volume multinode-737786
	I0522 18:32:19.598412  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:19.598430  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:32:19.598501  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:32:23.741449  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.142877958s)
	I0522 18:32:23.741484  160939 kic.go:203] duration metric: took 4.14304939s to extract preloaded images to volume ...
	W0522 18:32:23.741633  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:32:23.741756  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:32:23.786059  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786 --name multinode-737786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786 --network multinode-737786 --ip 192.168.67.2 --volume multinode-737786:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:32:24.069142  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Running}}
	I0522 18:32:24.086344  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.103978  160939 cli_runner.go:164] Run: docker exec multinode-737786 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:32:24.141807  160939 oci.go:144] the created container "multinode-737786" has a running status.
	I0522 18:32:24.141842  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa...
	I0522 18:32:24.342469  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:32:24.342509  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:32:24.363722  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.383810  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:32:24.383841  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:32:24.455784  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.474782  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:32:24.474871  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.497547  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.497754  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.497767  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:32:24.698482  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.698509  160939 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:32:24.698565  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.715252  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.715478  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.715502  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:32:24.840636  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.840711  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.857900  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.858096  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.858117  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:32:24.967023  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:32:24.967068  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:32:24.967091  160939 ubuntu.go:177] setting up certificates
	I0522 18:32:24.967102  160939 provision.go:84] configureAuth start
	I0522 18:32:24.967154  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:24.983423  160939 provision.go:143] copyHostCerts
	I0522 18:32:24.983455  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983479  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:32:24.983485  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983549  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:32:24.983615  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983633  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:32:24.983640  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983665  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:32:24.983708  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983723  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:32:24.983730  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983749  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:32:24.983796  160939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
	I0522 18:32:25.113895  160939 provision.go:177] copyRemoteCerts
	I0522 18:32:25.113964  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:32:25.113999  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.130480  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:25.215072  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:32:25.215123  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:32:25.235444  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:32:25.235498  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:32:25.255313  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:32:25.255360  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:32:25.275241  160939 provision.go:87] duration metric: took 308.123688ms to configureAuth
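	(configureAuth generates a CA-signed server certificate whose SANs cover every address the node answers on: loopback, the container IP, and the host/cluster names. minikube does this in Go; an equivalent openssl sketch, where the key size and validity period are assumptions:)
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	      -out server.csr -subj "/O=jenkins.multinode-737786"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
	      -CAcreateserial -days 365 -out server.pem \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.67.2,DNS:localhost,DNS:minikube,DNS:multinode-737786')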
	I0522 18:32:25.275280  160939 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:32:25.275447  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:25.275493  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.291597  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.291797  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.291813  160939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:32:25.403199  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:32:25.403222  160939 ubuntu.go:71] root file system type: overlay
	I0522 18:32:25.403368  160939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:32:25.403417  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.419508  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.419684  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.419742  160939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:32:25.540991  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:32:25.541068  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.556804  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.556997  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.557016  160939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:32:26.182116  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 18:32:25.538581939 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 18:32:26.182148  160939 machine.go:97] duration metric: took 1.707347407s to provisionDockerMachine
	I0522 18:32:26.182160  160939 client.go:171] duration metric: took 7.198404279s to LocalClient.Create
	I0522 18:32:26.182176  160939 start.go:167] duration metric: took 7.198463255s to libmachine.API.Create "multinode-737786"
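	(Note the update pattern used for docker.service above: diff exits non-zero only when the rendered unit differs from the installed one, so the mv/daemon-reload/restart branch runs only on change. The same idempotent pattern in isolation, with render_unit as a hypothetical stand-in for whatever writes the candidate file:)
	    render_unit | sudo tee /lib/systemd/system/docker.service.new >/dev/null  # hypothetical renderer
	    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }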
	I0522 18:32:26.182182  160939 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:32:26.182195  160939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:32:26.182267  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:32:26.182301  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.198446  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.283412  160939 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:32:26.286206  160939 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:32:26.286222  160939 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:32:26.286230  160939 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:32:26.286238  160939 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:32:26.286245  160939 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:32:26.286252  160939 command_runner.go:130] > ID=ubuntu
	I0522 18:32:26.286258  160939 command_runner.go:130] > ID_LIKE=debian
	I0522 18:32:26.286280  160939 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:32:26.286291  160939 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:32:26.286302  160939 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:32:26.286317  160939 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:32:26.286328  160939 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:32:26.286376  160939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:32:26.286410  160939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:32:26.286428  160939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:32:26.286440  160939 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:32:26.286455  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:32:26.286505  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:32:26.286590  160939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:32:26.286602  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:32:26.286703  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:32:26.294122  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:26.314177  160939 start.go:296] duration metric: took 131.985031ms for postStartSetup
	I0522 18:32:26.314484  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.329734  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:26.329958  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:32:26.329996  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.344674  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.423242  160939 command_runner.go:130] > 27%
	I0522 18:32:26.423479  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:32:26.427170  160939 command_runner.go:130] > 215G
	I0522 18:32:26.427358  160939 start.go:128] duration metric: took 7.446253482s to createHost
	I0522 18:32:26.427380  160939 start.go:83] releasing machines lock for "multinode-737786", held for 7.446425308s
	I0522 18:32:26.427450  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.442825  160939 ssh_runner.go:195] Run: cat /version.json
	I0522 18:32:26.442867  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.442937  160939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:32:26.443009  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.459148  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.459626  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.615027  160939 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:32:26.615123  160939 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:32:26.615168  160939 ssh_runner.go:195] Run: systemctl --version
	I0522 18:32:26.618922  160939 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:32:26.618954  160939 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:32:26.619096  160939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:32:26.622539  160939 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:32:26.622555  160939 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:32:26.622561  160939 command_runner.go:130] > Device: 37h/55d	Inode: 803930      Links: 1
	I0522 18:32:26.622567  160939 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:26.622576  160939 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622584  160939 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622592  160939 command_runner.go:130] > Change: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622604  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622753  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:32:26.643532  160939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:32:26.643591  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:32:26.666889  160939 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0522 18:32:26.666926  160939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 18:32:26.666940  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.666967  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.667076  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.679769  160939 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:32:26.680589  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:32:26.688804  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:32:26.696790  160939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:32:26.696843  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:32:26.705063  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.713131  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:32:26.721185  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.729165  160939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:32:26.736590  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:32:26.744755  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:32:26.752531  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:32:26.760599  160939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:32:26.767562  160939 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:32:26.767615  160939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:32:26.774559  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:26.839033  160939 ssh_runner.go:195] Run: sudo systemctl restart containerd
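	(Condensed, the containerd reconfiguration above pins the sandbox image, forces the cgroupfs driver and the v2 runc shim, and points the CNI conf_dir at /etc/cni/net.d before restarting the service:)
	    CFG=/etc/containerd/config.toml
	    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$CFG"
	    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
	    sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "$CFG"
	    sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$CFG"
	    sudo systemctl daemon-reload && sudo systemctl restart containerd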
	I0522 18:32:26.926529  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.926582  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.926653  160939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:32:26.936733  160939 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:32:26.936821  160939 command_runner.go:130] > [Unit]
	I0522 18:32:26.936842  160939 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:32:26.936853  160939 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:32:26.936864  160939 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:32:26.936876  160939 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:32:26.936886  160939 command_runner.go:130] > Wants=network-online.target
	I0522 18:32:26.936894  160939 command_runner.go:130] > Requires=docker.socket
	I0522 18:32:26.936904  160939 command_runner.go:130] > StartLimitBurst=3
	I0522 18:32:26.936910  160939 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:32:26.936921  160939 command_runner.go:130] > [Service]
	I0522 18:32:26.936928  160939 command_runner.go:130] > Type=notify
	I0522 18:32:26.936937  160939 command_runner.go:130] > Restart=on-failure
	I0522 18:32:26.936949  160939 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:32:26.936965  160939 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:32:26.936979  160939 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:32:26.936992  160939 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:32:26.937014  160939 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:32:26.937027  160939 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:32:26.937042  160939 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:32:26.937058  160939 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:32:26.937072  160939 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:32:26.937081  160939 command_runner.go:130] > ExecStart=
	I0522 18:32:26.937105  160939 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:32:26.937116  160939 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:32:26.937132  160939 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:32:26.937143  160939 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:32:26.937151  160939 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:32:26.937158  160939 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:32:26.937167  160939 command_runner.go:130] > LimitCORE=infinity
	I0522 18:32:26.937177  160939 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:32:26.937188  160939 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:32:26.937197  160939 command_runner.go:130] > TasksMax=infinity
	I0522 18:32:26.937203  160939 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:32:26.937216  160939 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:32:26.937224  160939 command_runner.go:130] > Delegate=yes
	I0522 18:32:26.937234  160939 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:32:26.937243  160939 command_runner.go:130] > KillMode=process
	I0522 18:32:26.937253  160939 command_runner.go:130] > [Install]
	I0522 18:32:26.937263  160939 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:32:26.937834  160939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:32:26.937891  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:32:26.948358  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.963466  160939 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:32:26.963527  160939 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:32:26.966525  160939 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:32:26.966635  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:32:26.974160  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:32:26.991240  160939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:32:27.087184  160939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:32:27.183939  160939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:32:27.184074  160939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:32:27.199707  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.274364  160939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:32:27.497339  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:32:27.508050  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.517912  160939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:32:27.594604  160939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:32:27.603789  160939 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0522 18:32:27.670370  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.738915  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:32:27.750303  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.759297  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.830818  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:32:27.886665  160939 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:32:27.886752  160939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:32:27.890680  160939 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:32:27.890703  160939 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:32:27.890711  160939 command_runner.go:130] > Device: 40h/64d	Inode: 258         Links: 1
	I0522 18:32:27.890720  160939 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:32:27.890729  160939 command_runner.go:130] > Access: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890736  160939 command_runner.go:130] > Modify: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890744  160939 command_runner.go:130] > Change: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890751  160939 command_runner.go:130] >  Birth: -
	I0522 18:32:27.890789  160939 start.go:562] Will wait 60s for crictl version
	I0522 18:32:27.890843  160939 ssh_runner.go:195] Run: which crictl
	I0522 18:32:27.893791  160939 command_runner.go:130] > /usr/bin/crictl
	I0522 18:32:27.893846  160939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:32:27.922140  160939 command_runner.go:130] > Version:  0.1.0
	I0522 18:32:27.922160  160939 command_runner.go:130] > RuntimeName:  docker
	I0522 18:32:27.922164  160939 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:32:27.922170  160939 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:32:27.924081  160939 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:32:27.924147  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.943721  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.943794  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.963666  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.967758  160939 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:32:27.967841  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:27.982248  160939 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:32:27.985502  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:27.994876  160939 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:32:27.994996  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:27.995038  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.010537  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.010570  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.010579  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.010586  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.010591  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.010596  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.010603  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.010611  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.011521  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.011540  160939 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:32:28.011593  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.027292  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.027322  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.027331  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.027336  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.027341  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.027345  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.027350  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.027355  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.028262  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.028281  160939 cache_images.go:84] Images are preloaded, skipping loading
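	(The preload check above compares the daemon's image list against the images kubeadm will need and skips tarball extraction when nothing is missing. A sketch of the comparison, with the image list abbreviated:)
	    have="$(docker images --format '{{.Repository}}:{{.Tag}}')"
	    for img in registry.k8s.io/kube-apiserver:v1.30.1 \
	               registry.k8s.io/etcd:3.5.12-0 \
	               registry.k8s.io/coredns/coredns:v1.11.1; do
	      grep -qx "$img" <<<"$have" || echo "missing: $img"  # a miss would trigger preload extraction
	    done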
	I0522 18:32:28.028301  160939 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:32:28.028415  160939 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:32:28.028462  160939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:32:28.069428  160939 command_runner.go:130] > cgroupfs
	I0522 18:32:28.070479  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:28.070498  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:28.070517  160939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:32:28.070539  160939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:32:28.070668  160939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:32:28.070717  160939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:32:28.078629  160939 command_runner.go:130] > kubeadm
	I0522 18:32:28.078645  160939 command_runner.go:130] > kubectl
	I0522 18:32:28.078649  160939 command_runner.go:130] > kubelet
	I0522 18:32:28.078672  160939 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:32:28.078732  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:32:28.086243  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:32:28.101448  160939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:32:28.116571  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
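	(The kubeadm config rendered above lands in /var/tmp/minikube/kubeadm.yaml.new and later drives the cluster bootstrap. How a file like this is consumed, sketched with an assumed invocation; minikube's exact kubeadm flags may differ:)
	    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=all  # assumed flags, shown for illustration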
	I0522 18:32:28.131251  160939 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:32:28.134083  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:28.142915  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:28.220165  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:28.231892  160939 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:32:28.231919  160939 certs.go:194] generating shared ca certs ...
	I0522 18:32:28.231939  160939 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.232062  160939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:32:28.232110  160939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:32:28.232120  160939 certs.go:256] generating profile certs ...
	I0522 18:32:28.232166  160939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:32:28.232179  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt with IP's: []
	I0522 18:32:28.429639  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt ...
	I0522 18:32:28.429667  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt: {Name:mkf8a2953d60a961d7574d013acfe3a49fa0bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429820  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key ...
	I0522 18:32:28.429830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key: {Name:mk8a5d9e68b7e6e877768e7a2b460a40a5615658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429900  160939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:32:28.429915  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0522 18:32:28.507177  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 ...
	I0522 18:32:28.507207  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43: {Name:mk09ce970fc623afc85e3fab7e404680e391a586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507367  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 ...
	I0522 18:32:28.507382  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43: {Name:mkb137dcb8e57c549f50c85273becdd727997895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507489  160939 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	I0522 18:32:28.507557  160939 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key
	I0522 18:32:28.507612  160939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:32:28.507627  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt with IP's: []
	I0522 18:32:28.617440  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt ...
	I0522 18:32:28.617473  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt: {Name:mk54959ff23e2bad94a115faba59db15d7610b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617661  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key ...
	I0522 18:32:28.617679  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key: {Name:mkd647f7d425cda8f2c79b7f52b5e4d12a0c0d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617777  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:32:28.617797  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:32:28.617808  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:32:28.617823  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:32:28.617836  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:32:28.617848  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:32:28.617860  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:32:28.617873  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:32:28.617924  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:32:28.617957  160939 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:32:28.617967  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:32:28.617990  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:32:28.618019  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:32:28.618040  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:32:28.618075  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:28.618102  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.618116  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.618128  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.618629  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:32:28.639518  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:32:28.659910  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:32:28.679937  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:32:28.699821  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:32:28.719536  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:32:28.739636  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:32:28.759509  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:32:28.779547  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:32:28.799365  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:32:28.819247  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:32:28.839396  160939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
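Note that the "scp memory" entries above do not copy a file from disk: minikube streams an in-memory asset over its existing SSH session and writes it to the target path. A rough shell analog, assuming a hypothetical $KUBECONFIG_CONTENT variable and $SSH_PORT (neither is a name from this log):

	# Stream in-memory bytes over SSH into a root-owned file, minikube-style
	printf '%s' "$KUBECONFIG_CONTENT" | ssh -p "$SSH_PORT" docker@127.0.0.1 \
	  "sudo tee /var/lib/minikube/kubeconfig >/dev/null"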
	I0522 18:32:28.854046  160939 ssh_runner.go:195] Run: openssl version
	I0522 18:32:28.858540  160939 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:32:28.858690  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:32:28.866551  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869507  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869532  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869569  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.875214  160939 command_runner.go:130] > b5213941
	I0522 18:32:28.875413  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:32:28.883074  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:32:28.890531  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893535  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893557  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893596  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.899083  160939 command_runner.go:130] > 51391683
	I0522 18:32:28.899310  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:32:28.906972  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:32:28.914876  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917837  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917865  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917909  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.923606  160939 command_runner.go:130] > 3ec20f2e
	I0522 18:32:28.923823  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
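The hash-and-symlink sequence above exists because OpenSSL looks up trusted CAs by subject hash: each certificate under /usr/share/ca-certificates needs a companion link in /etc/ssl/certs named <hash>.0. A minimal sketch of the same steps (the $hash variable is illustrative; paths are the ones from this run):

	# Compute the subject hash OpenSSL uses for lookup (b5213941 for minikubeCA.pem above)
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# Point the hash-named lookup entry at the certificate; -f replaces a stale link
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"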
	I0522 18:32:28.931516  160939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:32:28.934218  160939 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934259  160939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934296  160939 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:28.934404  160939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:32:28.950504  160939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:32:28.958332  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0522 18:32:28.958356  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0522 18:32:28.958365  160939 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0522 18:32:28.958430  160939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 18:32:28.966017  160939 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 18:32:28.966056  160939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 18:32:28.973169  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0522 18:32:28.973191  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0522 18:32:28.973203  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0522 18:32:28.973217  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973245  160939 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973254  160939 kubeadm.go:156] found existing configuration files:
	
	I0522 18:32:28.973282  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 18:32:28.979661  160939 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980332  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980367  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 18:32:28.987227  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 18:32:28.994428  160939 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994468  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994505  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 18:32:29.001374  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.008562  160939 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008604  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008648  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.015901  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 18:32:29.023088  160939 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023130  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023170  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
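The four grep-then-rm exchanges above are minikube's stale-config sweep: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is removed so kubeadm can regenerate it. A condensed bash equivalent, using only the endpoint and file names from this log:

	endpoint="https://control-plane.minikube.internal:8443"
	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # Keep the file only if it already points at the expected endpoint
	  sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
	done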
	I0522 18:32:29.030242  160939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 18:32:29.069760  160939 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069799  160939 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069836  160939 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 18:32:29.069844  160939 command_runner.go:130] > [preflight] Running pre-flight checks
	I0522 18:32:29.113834  160939 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113865  160939 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113960  160939 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.113987  160939 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.114021  160939 kubeadm.go:309] OS: Linux
	I0522 18:32:29.114029  160939 command_runner.go:130] > OS: Linux
	I0522 18:32:29.114085  160939 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 18:32:29.114092  160939 command_runner.go:130] > CGROUPS_CPU: enabled
	I0522 18:32:29.114134  160939 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114140  160939 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114177  160939 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 18:32:29.114183  160939 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0522 18:32:29.114230  160939 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 18:32:29.114237  160939 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0522 18:32:29.114278  160939 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 18:32:29.114285  160939 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0522 18:32:29.114324  160939 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 18:32:29.114331  160939 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0522 18:32:29.114373  160939 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 18:32:29.114379  160939 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0522 18:32:29.114421  160939 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114428  160939 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114464  160939 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 18:32:29.114483  160939 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0522 18:32:29.173446  160939 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173485  160939 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173623  160939 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173639  160939 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173777  160939 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.173789  160939 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.376675  160939 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379640  160939 out.go:204]   - Generating certificates and keys ...
	I0522 18:32:29.376743  160939 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379742  160939 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0522 18:32:29.379760  160939 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 18:32:29.379853  160939 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.379864  160939 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.571675  160939 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.571705  160939 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.667370  160939 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.667408  160939 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.730638  160939 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:29.730650  160939 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:30.114166  160939 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.114190  160939 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.185007  160939 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185032  160939 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185157  160939 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.185169  160939 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376151  160939 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376188  160939 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376347  160939 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376364  160939 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.621621  160939 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.621651  160939 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.882886  160939 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.882922  160939 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.976851  160939 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 18:32:30.976877  160939 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0522 18:32:30.976927  160939 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:30.976932  160939 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:31.205083  160939 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.205126  160939 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.287749  160939 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.287812  160939 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.548360  160939 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.548390  160939 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.793952  160939 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.793983  160939 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.889475  160939 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.889508  160939 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.890099  160939 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.890122  160939 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.892764  160939 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895234  160939 out.go:204]   - Booting up control plane ...
	I0522 18:32:31.892832  160939 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895375  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895388  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895507  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895522  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895605  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.895619  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.903936  160939 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.903958  160939 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.904721  160939 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904737  160939 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904800  160939 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 18:32:31.904815  160939 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0522 18:32:31.989235  160939 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989268  160939 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989364  160939 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:31.989377  160939 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:32.490313  160939 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490352  160939 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490462  160939 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:32.490478  160939 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:36.991403  160939 kubeadm.go:309] [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:36.991445  160939 command_runner.go:130] > [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:37.002153  160939 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.002184  160939 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.012503  160939 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.012532  160939 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.028436  160939 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028465  160939 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028707  160939 kubeadm.go:309] [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.028725  160939 command_runner.go:130] > [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.035001  160939 kubeadm.go:309] [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.035012  160939 command_runner.go:130] > [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.036324  160939 out.go:204]   - Configuring RBAC rules ...
	I0522 18:32:37.036438  160939 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.036450  160939 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.039237  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.039252  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.044789  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.044808  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.047056  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.047074  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.049159  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.049174  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.051503  160939 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.051520  160939 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.397004  160939 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.397044  160939 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.813980  160939 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 18:32:37.814007  160939 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0522 18:32:38.397032  160939 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.397056  160939 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.398018  160939 kubeadm.go:309] 
	I0522 18:32:38.398101  160939 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398119  160939 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398137  160939 kubeadm.go:309] 
	I0522 18:32:38.398211  160939 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398218  160939 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398222  160939 kubeadm.go:309] 
	I0522 18:32:38.398246  160939 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 18:32:38.398255  160939 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0522 18:32:38.398337  160939 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398355  160939 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398434  160939 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398443  160939 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398453  160939 kubeadm.go:309] 
	I0522 18:32:38.398515  160939 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398522  160939 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398529  160939 kubeadm.go:309] 
	I0522 18:32:38.398609  160939 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398618  160939 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398622  160939 kubeadm.go:309] 
	I0522 18:32:38.398664  160939 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 18:32:38.398677  160939 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0522 18:32:38.398789  160939 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398800  160939 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398863  160939 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398869  160939 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398873  160939 kubeadm.go:309] 
	I0522 18:32:38.398944  160939 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.398950  160939 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.399022  160939 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 18:32:38.399032  160939 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0522 18:32:38.399037  160939 kubeadm.go:309] 
	I0522 18:32:38.399123  160939 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399130  160939 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399216  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399222  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399239  160939 kubeadm.go:309] 	--control-plane 
	I0522 18:32:38.399245  160939 command_runner.go:130] > 	--control-plane 
	I0522 18:32:38.399248  160939 kubeadm.go:309] 
	I0522 18:32:38.399370  160939 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399378  160939 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399382  160939 kubeadm.go:309] 
	I0522 18:32:38.399476  160939 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399489  160939 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399636  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.399649  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.401263  160939 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401277  160939 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401363  160939 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401380  160939 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
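The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 digest of the cluster CA's public key; joining nodes use it to pin the control plane before trusting it. The standard kubeadm recipe recomputes it on the control-plane node like this (the cert path is the one minikube populates in this run; upstream docs use /etc/kubernetes/pki/ca.crt):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'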
	I0522 18:32:38.401398  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:38.401406  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:38.403405  160939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 18:32:38.404599  160939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 18:32:38.408100  160939 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0522 18:32:38.408121  160939 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0522 18:32:38.408128  160939 command_runner.go:130] > Device: 37h/55d	Inode: 808770      Links: 1
	I0522 18:32:38.408133  160939 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:38.408141  160939 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408145  160939 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408150  160939 command_runner.go:130] > Change: 2024-05-22 17:45:13.285811920 +0000
	I0522 18:32:38.408155  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:13.257809894 +0000
	I0522 18:32:38.408204  160939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 18:32:38.408217  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 18:32:38.424237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 18:32:38.586825  160939 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.590952  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.596051  160939 command_runner.go:130] > serviceaccount/kindnet created
	I0522 18:32:38.602929  160939 command_runner.go:130] > daemonset.apps/kindnet created
	I0522 18:32:38.606148  160939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 18:32:38.606224  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.606247  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-737786 minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=multinode-737786 minikube.k8s.io/primary=true
	I0522 18:32:38.613527  160939 command_runner.go:130] > -16
	I0522 18:32:38.613563  160939 ops.go:34] apiserver oom_adj: -16
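The -16 recorded here is the kube-apiserver's OOM score adjustment: a negative value tells the kernel's OOM killer to prefer other processes when memory runs short, and minikube logs it as a sanity check. The probe is the one-liner it just ran:

	# Negative oom_adj shields the apiserver from the OOM killer
	cat /proc/$(pgrep kube-apiserver)/oom_adj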
	I0522 18:32:38.671101  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0522 18:32:38.671199  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.679745  160939 command_runner.go:130] > node/multinode-737786 labeled
	I0522 18:32:38.773177  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.171792  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.232239  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.671894  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.732898  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.171368  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.228640  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.671860  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.732183  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.171401  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.231451  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.672085  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.732558  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.172181  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.230594  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.672237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.733746  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.171306  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.233896  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.671416  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.730755  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.171408  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.231441  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.672067  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.729906  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.171343  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.231696  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.671243  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.732606  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.172238  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.229695  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.671885  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.731711  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.171960  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.228503  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.671939  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.733171  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.171805  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.230525  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.672280  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.731666  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.171973  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.230294  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.671915  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.733184  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.171393  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.230515  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.672155  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.732157  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.171406  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.266742  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.671250  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.747943  160939 command_runner.go:130] > NAME      SECRETS   AGE
	I0522 18:32:51.747967  160939 command_runner.go:130] > default   0         0s
	I0522 18:32:51.747991  160939 kubeadm.go:1107] duration metric: took 13.141832952s to wait for elevateKubeSystemPrivileges
	W0522 18:32:51.748021  160939 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 18:32:51.748034  160939 kubeadm.go:393] duration metric: took 22.813740637s to StartCluster
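The run of 'serviceaccounts "default" not found' errors above is expected: minikube polls kubectl until kube-controller-manager has created the default ServiceAccount, which is its signal that kube-system privileges can be elevated. A rough shell equivalent of that wait (the sleep interval is illustrative; minikube's own retry cadence differs):

	# Poll until the controller-manager has created the default ServiceAccount
	until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done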
	I0522 18:32:51.748054  160939 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.748131  160939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.748830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.749052  160939 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:51.750591  160939 out.go:177] * Verifying Kubernetes components...
	I0522 18:32:51.749093  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 18:32:51.749107  160939 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:32:51.749382  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:51.752222  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:51.752296  160939 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:32:51.752312  160939 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:32:51.752326  160939 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	I0522 18:32:51.752339  160939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:32:51.752357  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.752681  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.752857  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.774832  160939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:51.775039  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.776160  160939 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:51.776175  160939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:32:51.776227  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.776423  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
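The rest.Config above authenticates with a client certificate and key signed by the cluster CA. Expressed as kubectl flags instead of Go (an illustrative equivalent, not a command from this run; the paths are the ones in the config):

	kubectl get nodes --server=https://192.168.67.2:8443 \
	  --certificate-authority=/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt \
	  --client-certificate=/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt \
	  --client-key=/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key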
	I0522 18:32:51.776863  160939 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:32:51.776981  160939 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	I0522 18:32:51.777016  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.777336  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.795509  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.796953  160939 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:51.796975  160939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:32:51.797025  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.814477  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.870824  160939 command_runner.go:130] > apiVersion: v1
	I0522 18:32:51.870847  160939 command_runner.go:130] > data:
	I0522 18:32:51.870853  160939 command_runner.go:130] >   Corefile: |
	I0522 18:32:51.870859  160939 command_runner.go:130] >     .:53 {
	I0522 18:32:51.870863  160939 command_runner.go:130] >         errors
	I0522 18:32:51.870869  160939 command_runner.go:130] >         health {
	I0522 18:32:51.870875  160939 command_runner.go:130] >            lameduck 5s
	I0522 18:32:51.870881  160939 command_runner.go:130] >         }
	I0522 18:32:51.870894  160939 command_runner.go:130] >         ready
	I0522 18:32:51.870908  160939 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0522 18:32:51.870919  160939 command_runner.go:130] >            pods insecure
	I0522 18:32:51.870929  160939 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0522 18:32:51.870939  160939 command_runner.go:130] >            ttl 30
	I0522 18:32:51.870946  160939 command_runner.go:130] >         }
	I0522 18:32:51.870957  160939 command_runner.go:130] >         prometheus :9153
	I0522 18:32:51.870967  160939 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0522 18:32:51.870977  160939 command_runner.go:130] >            max_concurrent 1000
	I0522 18:32:51.870983  160939 command_runner.go:130] >         }
	I0522 18:32:51.870993  160939 command_runner.go:130] >         cache 30
	I0522 18:32:51.871002  160939 command_runner.go:130] >         loop
	I0522 18:32:51.871009  160939 command_runner.go:130] >         reload
	I0522 18:32:51.871022  160939 command_runner.go:130] >         loadbalance
	I0522 18:32:51.871031  160939 command_runner.go:130] >     }
	I0522 18:32:51.871038  160939 command_runner.go:130] > kind: ConfigMap
	I0522 18:32:51.871047  160939 command_runner.go:130] > metadata:
	I0522 18:32:51.871058  160939 command_runner.go:130] >   creationTimestamp: "2024-05-22T18:32:37Z"
	I0522 18:32:51.871067  160939 command_runner.go:130] >   name: coredns
	I0522 18:32:51.871075  160939 command_runner.go:130] >   namespace: kube-system
	I0522 18:32:51.871086  160939 command_runner.go:130] >   resourceVersion: "229"
	I0522 18:32:51.871097  160939 command_runner.go:130] >   uid: d6517ddd-1175-4a40-a10d-60d1d382d7ae
	I0522 18:32:51.892382  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:51.892495  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
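The pipeline above patches CoreDNS in place: the first sed expression inserts a hosts block ahead of the forward stanza so host.minikube.internal resolves to the host gateway (192.168.67.1), the second enables query logging, and the result is fed to kubectl replace. Reconstructed from those sed expressions (the patched Corefile is not captured verbatim in this log), the affected fragment should read roughly:

	        log
	        errors
	        # ... unchanged plugins ...
	        hosts {
	           192.168.67.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf {
	           max_concurrent 1000
	        }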
	I0522 18:32:51.950050  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.950378  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.950733  160939 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.950852  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:51.950863  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.950877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.950889  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.959546  160939 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0522 18:32:51.959576  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.959584  160939 round_trippers.go:580]     Audit-Id: 5ddc21bd-b1b2-4ea2-81cf-c014c9a04f15
	I0522 18:32:51.959590  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.959595  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.959598  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.959602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.959606  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.959736  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:51.960668  160939 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:32:51.960761  160939 node_ready.go:38] duration metric: took 9.99326ms for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.960805  160939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:32:51.960931  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:32:51.960963  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.960982  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.960996  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.964902  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:51.964929  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.964939  160939 round_trippers.go:580]     Audit-Id: 8b3d34ee-cdb3-49cd-991b-94f61024f9e2
	I0522 18:32:51.964945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.964952  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.964972  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.964977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.964987  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.965722  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"354"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59005 chars]
	I0522 18:32:51.970917  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	I0522 18:32:51.971068  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:51.971109  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.971130  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.971146  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.043914  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:52.045304  160939 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0522 18:32:52.045329  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.045339  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.045343  160939 round_trippers.go:580]     Audit-Id: bed69948-0150-43f6-8c9c-dfd39f8a81e4
	I0522 18:32:52.045349  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.045354  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.045361  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.045365  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.046685  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.047307  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.047329  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.047339  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.047344  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.049383  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:52.051476  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.051500  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.051510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.051516  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.051520  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.051524  160939 round_trippers.go:580]     Audit-Id: 2d50dfec-8764-4cd8-92b8-99f40ba4532d
	I0522 18:32:52.051530  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.051543  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.051659  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.471981  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.472002  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.472013  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.472019  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.547388  160939 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0522 18:32:52.547416  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.547425  160939 round_trippers.go:580]     Audit-Id: 3eb91eea-1138-4663-bd0b-d4f080c3a1ee
	I0522 18:32:52.547430  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.547435  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.547439  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.547457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.547463  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.547916  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.548699  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.548751  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.548782  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.548796  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.554135  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.554200  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.554224  160939 round_trippers.go:580]     Audit-Id: c62627b8-a513-4303-8697-a7fe1f12763e
	I0522 18:32:52.554239  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.554272  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.554291  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.554304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.554318  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.554527  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.556697  160939 command_runner.go:130] > configmap/coredns replaced
	I0522 18:32:52.556753  160939 start.go:946] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0522 18:32:52.557175  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:52.557491  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:52.557873  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.557907  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.557920  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.557932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558046  160939 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0522 18:32:52.558165  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:32:52.558237  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.558260  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558272  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.560256  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:52.560319  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.560338  160939 round_trippers.go:580]     Audit-Id: 12b0e11e-6a44-4304-a157-2b7055e2205e
	I0522 18:32:52.560351  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.560363  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.560396  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.560416  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.560431  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.560444  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.560488  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561030  160939 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561137  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.561162  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.561192  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.561209  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.561222  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.561529  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:52.561547  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.561556  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.561562  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.561567  160939 round_trippers.go:580]     Content-Length: 1273
	I0522 18:32:52.561573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.561577  160939 round_trippers.go:580]     Audit-Id: e2fb2ed9-f480-430a-b9b8-1cb5e5498c36
	I0522 18:32:52.561587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.561592  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.561795  160939 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0522 18:32:52.562115  160939 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.562161  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:32:52.562173  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.562180  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.562188  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.562193  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.566308  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.566355  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.566400  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566361  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566429  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566439  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566449  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566463  160939 round_trippers.go:580]     Content-Length: 1220
	I0522 18:32:52.566468  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566473  160939 round_trippers.go:580]     Audit-Id: 6b60d46d-17ef-45bb-880c-06c439fe9bab
	I0522 18:32:52.566411  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566491  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566498  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566501  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.566505  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566505  160939 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.566509  160939 round_trippers.go:580]     Audit-Id: 2b01bd0d-fb2f-4a1e-8831-7dc2e68860f5
	I0522 18:32:52.566521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566538  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"360","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.972030  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.972055  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.972069  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.972073  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.973864  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.973887  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.973900  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.973905  160939 round_trippers.go:580]     Audit-Id: 487db757-1a6c-442b-b5d4-799652d478f6
	I0522 18:32:52.973912  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.973918  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.973922  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.973927  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.974296  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:52.974890  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.974910  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.974922  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.974927  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.976545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.976564  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.976574  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.976579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.976584  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.976589  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.976594  160939 round_trippers.go:580]     Audit-Id: 785dc732-84fe-4320-964c-c2a36a76c8f6
	I0522 18:32:52.976600  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.976934  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.058578  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:53.058609  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.058620  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.058627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.061245  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.061289  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.061299  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:53.061340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.061372  160939 round_trippers.go:580]     Audit-Id: 77d818dd-5f3a-495e-b1ef-ad1a288275fa
	I0522 18:32:53.061388  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.061402  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.061415  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.061432  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.061472  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"370","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:53.061571  160939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-737786" context rescaled to 1 replicas
	I0522 18:32:53.076516  160939 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0522 18:32:53.076577  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0522 18:32:53.076599  160939 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076613  160939 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076633  160939 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0522 18:32:53.076657  160939 command_runner.go:130] > pod/storage-provisioner created
	I0522 18:32:53.076679  160939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02727208s)
	I0522 18:32:53.079116  160939 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:32:53.080504  160939 addons.go:505] duration metric: took 1.3313922s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0522 18:32:53.471419  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.471453  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.471462  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.471488  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.473769  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.473791  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.473800  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.473806  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.473811  160939 round_trippers.go:580]     Audit-Id: 19f0699f-65e4-4321-a5c4-f6dcf712595d
	I0522 18:32:53.473821  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.473827  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.473830  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.474009  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.474506  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.474523  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.474532  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.474538  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.476545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.476568  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.476579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.476584  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.476591  160939 round_trippers.go:580]     Audit-Id: 723b363a-893a-4a61-92a4-6c8128f0cdae
	I0522 18:32:53.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.476602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.476735  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.971555  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.971574  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.971587  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.971591  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.973627  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.973649  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.973659  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.973664  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.973670  160939 round_trippers.go:580]     Audit-Id: e1a5610a-326e-418b-be80-a1b218bad573
	I0522 18:32:53.973679  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.973686  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.973691  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.973900  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.974364  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.974377  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.974386  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.974395  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.976104  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.976125  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.976134  160939 round_trippers.go:580]     Audit-Id: 1d117d40-7bef-4873-8469-b7cbb9e6e3e0
	I0522 18:32:53.976139  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.976143  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.976148  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.976158  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.976278  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.976641  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:54.471526  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.471550  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.471561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.471566  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.473892  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.473909  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.473916  160939 round_trippers.go:580]     Audit-Id: 38fa8439-426c-4d8e-8939-768fdd726b5d
	I0522 18:32:54.473920  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.473923  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.473929  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.473935  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.473939  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.474175  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.474657  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.474672  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.474679  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.474682  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.476422  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.476440  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.476449  160939 round_trippers.go:580]     Audit-Id: a464492a-887c-4ec3-9a36-841c6416e733
	I0522 18:32:54.476454  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.476458  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.476461  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.476465  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.476470  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.476646  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:54.971300  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.971328  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.971338  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.971345  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.973536  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.973554  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.973560  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.973564  160939 round_trippers.go:580]     Audit-Id: 233e0e2b-7f8e-4aa8-8c2e-b30dfaf9e4ee
	I0522 18:32:54.973569  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.973575  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.973580  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.973588  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.973824  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.974258  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.974270  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.974277  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.974281  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.976126  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.976141  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.976157  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.976161  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.976166  160939 round_trippers.go:580]     Audit-Id: 72f4a310-bf67-444b-9e24-1577b45c6c56
	I0522 18:32:54.976171  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.976176  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.976347  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.471862  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.471892  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.471903  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.471908  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.474083  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:55.474099  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.474105  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.474108  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.474111  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.474114  160939 round_trippers.go:580]     Audit-Id: 8719e64b-1bf6-4245-a412-eed38a58d1ce
	I0522 18:32:55.474117  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.474121  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.474290  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.474797  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.474823  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.474832  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.474840  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.476324  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.476342  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.476349  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.476355  160939 round_trippers.go:580]     Audit-Id: db213f13-4ec8-4ca3-8987-3f1626a1ad2d
	I0522 18:32:55.476361  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.476365  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.476368  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.476372  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.476512  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.972155  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.972178  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.972186  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.972189  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.973945  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.973967  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.973975  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.973981  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.973987  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.973990  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.973994  160939 round_trippers.go:580]     Audit-Id: a2f51de9-bbaf-49c3-b52e-cd37fc92f529
	I0522 18:32:55.973999  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.974153  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.974595  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.974611  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.974621  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.974627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.976270  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.976293  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.976301  160939 round_trippers.go:580]     Audit-Id: 93227216-8ffe-41b3-8a0d-0b4e86a54912
	I0522 18:32:55.976306  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.976310  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.976315  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.976319  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.976325  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.976427  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.976688  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:57.976919  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:59.977937  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
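	(The iterations above repeat this pattern: roughly every 500 ms the test GETs the coredns pod and its node, and pod_ready.go logs that the pod's Ready status is still "False". As a minimal, hypothetical client-go sketch of that polling pattern — an assumption for illustration, not minikube's actual implementation; the kubeconfig handling, namespace, and pod name are taken from the log:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady mirrors the kind of check behind the pod_ready.go:102 message:
	// the pod counts as ready only when its PodReady condition is True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumption: a reachable kubeconfig at the default location.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for {
			// The same request the log records:
			// GET .../namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-fhhmr", metav1.GetOptions{})
			if err == nil && podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up waiting for pod readiness:", ctx.Err())
				return
			case <-time.After(500 * time.Millisecond): // the ~500 ms cadence seen above
			}
		}
	}

	Polling a single pod like this is simple but chatty, which is why the log repeats the same GETs; a watch, or client-go's wait helpers, would achieve the same result with far fewer requests.)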
	I0522 18:33:00.471222  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.471241  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.471249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.471252  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.473593  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.473618  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.473629  160939 round_trippers.go:580]     Audit-Id: c4fb389b-3f7d-490e-a802-3bf985dfd423
	I0522 18:33:00.473636  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.473641  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.473645  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.473651  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.473656  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.473892  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.474545  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.474565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.474576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.474581  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.476561  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.476581  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.476590  160939 round_trippers.go:580]     Audit-Id: 67254c57-0400-4b43-af9d-f4913af7b105
	I0522 18:33:00.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.476603  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.476608  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.476611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.476748  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:00.971233  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.971261  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.971299  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.971306  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.973731  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.973750  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.973758  160939 round_trippers.go:580]     Audit-Id: 2f76e9b4-7689-4d89-b284-e9126bd9bad5
	I0522 18:33:00.973762  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.973765  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.973771  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.973774  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.973784  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.974017  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.974608  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.974625  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.974634  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.974639  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.976439  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.976457  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.976465  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.976470  160939 round_trippers.go:580]     Audit-Id: f4fe94f7-5d5c-4b51-a0c7-f46b19a6f0d4
	I0522 18:33:00.976477  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.976485  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.976494  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.976502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.976610  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.471893  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.471931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.471942  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.471949  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.474657  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.474680  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.474688  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.474696  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.474702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.474725  160939 round_trippers.go:580]     Audit-Id: f26f6817-f4b1-4acb-bdf5-088215c31307
	I0522 18:33:01.474736  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.474740  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.474974  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.475618  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.475639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.475649  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.475655  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.477465  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.477487  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.477497  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.477505  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.477510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.477514  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.477517  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.477524  160939 round_trippers.go:580]     Audit-Id: 1977529f-1acd-423c-9682-42cf6dd4398d
	I0522 18:33:01.477708  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.971204  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.971371  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.971388  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.971393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974041  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.974091  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.974104  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.974111  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.974116  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.974121  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.974127  160939 round_trippers.go:580]     Audit-Id: 292c70c4-b00e-4836-b96a-6c8a747f9bd9
	I0522 18:33:01.974131  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.974293  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.974866  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.974888  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.974899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.976825  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.976848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.976856  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.976862  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.976868  160939 round_trippers.go:580]     Audit-Id: 388c0271-dee4-4384-b77b-c690f1d36c5a
	I0522 18:33:01.976873  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.976880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.976883  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.977037  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.471454  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.471549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.471565  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.471574  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.474157  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.474178  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.474186  160939 round_trippers.go:580]     Audit-Id: 82bb2437-1ea8-4e8d-9e5f-70376d7ee9ee
	I0522 18:33:02.474192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.474196  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.474200  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.474205  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.474208  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.474392  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.475060  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.475077  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.475087  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.475092  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.477070  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.477099  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.477109  160939 round_trippers.go:580]     Audit-Id: 67eab720-8fd6-4965-a754-5010c88a7253
	I0522 18:33:02.477116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.477120  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.477124  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.477127  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.477131  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.477280  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.477649  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
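
Each half-second iteration above is minikube's readiness wait (pod_ready.go): GET the pod, inspect its Ready condition, GET the node it is scheduled on, sleep, and retry until a timeout. A condensed sketch of that polling pattern with client-go; waitPodReady and the fixed 500ms interval are illustrative assumptions, not minikube's exact implementation:

    package ready

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls until the pod's Ready condition is True or the
    // timeout expires, mirroring the GET/check/sleep cycle in the log.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err != nil {
    			return err
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("pod %s/%s did not become Ready within %v", ns, name, timeout)
    }
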
	I0522 18:33:02.971540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.971565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.971576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.971582  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.974293  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.974315  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.974325  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.974330  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.974335  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.974340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.974345  160939 round_trippers.go:580]     Audit-Id: ad75c6ab-9962-47cf-be26-f410ec61bd12
	I0522 18:33:02.974350  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.974587  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.975218  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.975239  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.975249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.975258  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.977182  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.977245  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.977260  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.977266  160939 round_trippers.go:580]     Audit-Id: c0467f5a-9a3a-40e8-b473-9c175fd6891e
	I0522 18:33:02.977271  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.977277  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.977284  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.977288  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.977392  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.472108  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.472133  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.472143  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.472149  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.474741  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.474768  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.474778  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.474782  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.474787  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.474792  160939 round_trippers.go:580]     Audit-Id: 1b9bea48-179f-40ca-a879-0e436eb40d14
	I0522 18:33:03.474797  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.474801  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.474970  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.475572  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.475591  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.475601  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.475607  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.477470  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.477489  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.477497  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.477502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.477506  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.477511  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.477515  160939 round_trippers.go:580]     Audit-Id: b00b1393-d773-4e79-83a7-fbadc0d83dce
	I0522 18:33:03.477521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.477650  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.971411  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.971440  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.971450  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.971455  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.974132  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.974155  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.974164  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.974171  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.974176  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.974180  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.974185  160939 round_trippers.go:580]     Audit-Id: 2b46951a-0d87-464c-b928-e0491b518b0e
	I0522 18:33:03.974192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.974344  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.974929  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.974949  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.974959  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.974965  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.976727  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.976759  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.976769  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.976775  160939 round_trippers.go:580]     Audit-Id: efda080a-3af4-4b70-aa46-baefc2b1a086
	I0522 18:33:03.976779  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.976784  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.976788  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.976792  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.977006  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.471440  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.471466  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.471475  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.471478  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.473781  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.473798  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.473806  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.473812  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.473823  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.473828  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.473832  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.473837  160939 round_trippers.go:580]     Audit-Id: 584fe422-d82d-4c7e-81d2-665d8be8873b
	I0522 18:33:04.474014  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.474484  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.474542  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.474564  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.474581  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.476818  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.476848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.476856  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.476862  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.476866  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.476872  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.476877  160939 round_trippers.go:580]     Audit-Id: 577875ba-d973-41fb-8b48-0973202f1354
	I0522 18:33:04.476885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.477034  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.971729  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.971751  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.971759  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.971763  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.974273  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.974295  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.974304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.974311  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.974318  160939 round_trippers.go:580]     Audit-Id: e77cbda3-9098-456e-962d-06d9e7e98aee
	I0522 18:33:04.974323  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.974336  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.974341  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.974475  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.975121  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.975157  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.975167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.975172  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.977047  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:04.977076  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.977086  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.977094  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.977102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.977110  160939 round_trippers.go:580]     Audit-Id: 15591115-c0cb-473f-90d4-6c56cf6353d7
	I0522 18:33:04.977116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.977124  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.977257  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.977558  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:33:05.471962  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.471987  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.471997  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.472003  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.474481  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.474506  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.474516  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.474523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.474527  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.474532  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.474536  160939 round_trippers.go:580]     Audit-Id: fdb343ad-37ed-4d5e-8481-409ca7bff1bb
	I0522 18:33:05.474542  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.474675  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.475316  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.475335  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.475345  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.475349  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.477162  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:05.477192  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.477208  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.477219  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.477224  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.477230  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.477237  160939 round_trippers.go:580]     Audit-Id: 5a4a1adb-a9e7-45d6-89b9-6f8cbdc8e14f
	I0522 18:33:05.477241  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.477365  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:05.971575  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.971603  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.971614  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.971620  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.973961  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.973988  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.973998  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.974005  160939 round_trippers.go:580]     Audit-Id: 6cf57dbb-f61f-4a34-ba71-0fa1a7be6c2f
	I0522 18:33:05.974009  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.974015  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.974020  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.974024  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.974227  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.974844  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.974866  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.974877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.974885  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.976914  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.976937  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.976948  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.976955  160939 round_trippers.go:580]     Audit-Id: f5c6902b-e141-4739-b75c-abe5d7d10bcc
	I0522 18:33:05.976962  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.976969  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.976977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.976982  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.977139  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.471359  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:06.471382  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.471390  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.471393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.473976  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.473998  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.474008  160939 round_trippers.go:580]     Audit-Id: 678a5898-c668-42b8-9f9d-cd08c0af9f0a
	I0522 18:33:06.474014  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.474021  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.474026  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.474032  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.474036  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.474212  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"419","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6465 chars]
	I0522 18:33:06.474787  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.474806  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.474816  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.474824  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.476696  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.476720  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.476727  160939 round_trippers.go:580]     Audit-Id: 08522360-196f-4610-a526-8fbc3b876994
	I0522 18:33:06.476732  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.476736  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.476739  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.476742  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.476754  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.476918  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.477418  160939 pod_ready.go:97] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0522 18:33:06.477449  160939 pod_ready.go:81] duration metric: took 14.506466075s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	E0522 18:33:06.477464  160939 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
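
The skip above is the waiter's terminal-phase guard: this CoreDNS pod already carried a deletionTimestamp (most likely because minikube scales the CoreDNS Deployment down to one replica), its container exited 0, and a pod in phase Succeeded can never become Ready, so polling it further would only burn the 6-minute budget. A sketch of that guard under the same client-go assumptions as the sketch above; terminalPhase is an illustrative name, not minikube's:

    // terminalPhase reports whether a pod can no longer become Ready.
    // Succeeded and Failed are terminal, so a readiness waiter should
    // return an error for such pods instead of polling until timeout.
    func terminalPhase(pod *corev1.Pod) bool {
    	return pod.Status.Phase == corev1.PodSucceeded ||
    		pod.Status.Phase == corev1.PodFailed
    }
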
	I0522 18:33:06.477476  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:06.477540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.477549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.477558  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.477569  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.479562  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.479577  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.479583  160939 round_trippers.go:580]     Audit-Id: 9a30cf33-1204-4670-a99f-86946c97d423
	I0522 18:33:06.479587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.479591  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.479597  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.479605  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.479611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.479794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.480253  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.480269  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.480275  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.480279  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.481839  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.481857  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.481867  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.481872  160939 round_trippers.go:580]     Audit-Id: fa40a49d-204f-481d-8912-a34512c1ae3b
	I0522 18:33:06.481876  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.481880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.481884  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.481888  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.481980  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.978658  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.978680  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.978691  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.978699  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.980836  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.980853  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.980860  160939 round_trippers.go:580]     Audit-Id: afbb292e-0ad0-4084-869c-e9ab1e1013e2
	I0522 18:33:06.980864  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.980867  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.980869  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.980871  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.980874  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.981047  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.981449  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.981462  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.981468  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.981471  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.982978  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.983001  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.983007  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.983010  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.983014  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.983018  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.983021  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.983024  160939 round_trippers.go:580]     Audit-Id: 5f3372bc-5c9a-49ce-8e2e-d96da0513d85
	I0522 18:33:06.983146  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.478352  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:07.478377  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.478384  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.478388  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.480498  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.480523  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.480531  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.480535  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.480540  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.480543  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.480546  160939 round_trippers.go:580]     Audit-Id: eb5f2654-4971-4578-bff8-10e4102baa23
	I0522 18:33:07.480550  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.480747  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:33:07.481177  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.481191  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.481197  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.481201  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.482856  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.482869  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.482876  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.482880  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.482882  160939 round_trippers.go:580]     Audit-Id: 8e36f69f-54f0-4e9d-a61f-f28960dbb847
	I0522 18:33:07.482885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.482891  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.482896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.483013  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.483304  160939 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.483324  160939 pod_ready.go:81] duration metric: took 1.005836965s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483334  160939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:33:07.483393  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.483399  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.483403  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.485055  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.485074  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.485080  160939 round_trippers.go:580]     Audit-Id: 36a9d3b1-5c0c-41cd-92e6-65aaf83162ed
	I0522 18:33:07.485084  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.485089  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.485093  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.485098  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.485102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.485211  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:33:07.485525  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.485537  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.485544  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.485547  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.486957  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.486977  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.486984  160939 round_trippers.go:580]     Audit-Id: 4d183f34-de9b-40df-89b0-747f4b8d080a
	I0522 18:33:07.486991  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.486997  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.487008  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.487015  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.487019  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.487106  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.487417  160939 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.487433  160939 pod_ready.go:81] duration metric: took 4.091969ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487445  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487498  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:33:07.487505  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.487511  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.487514  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.489030  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.489044  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.489060  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.489064  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.489068  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.489072  160939 round_trippers.go:580]     Audit-Id: 816d35e6-d77c-435e-912a-947f9c9ca4d7
	I0522 18:33:07.489075  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.489078  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.489182  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:33:07.489546  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.489558  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.489564  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.489568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.490910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.490924  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.490930  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.490934  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.490937  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.490942  160939 round_trippers.go:580]     Audit-Id: 15a2ac49-01ac-4660-8380-560b4572c707
	I0522 18:33:07.490945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.490949  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.491063  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.491412  160939 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.491430  160939 pod_ready.go:81] duration metric: took 3.978447ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491441  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491501  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:33:07.491510  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.491520  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.491525  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.492901  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.492917  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.492936  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.492944  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.492949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.492953  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.492958  160939 round_trippers.go:580]     Audit-Id: 599fa209-a829-4a91-9f16-72ec6e1a6954
	I0522 18:33:07.492961  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.493092  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:33:07.493557  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.493574  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.493584  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.493594  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.495001  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.495023  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.495032  160939 round_trippers.go:580]     Audit-Id: 451564e8-a844-4514-b8e9-ba808ecbe9d8
	I0522 18:33:07.495042  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.495047  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.495051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.495057  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.495061  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.495200  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.495470  160939 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.495494  160939 pod_ready.go:81] duration metric: took 4.045749ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495507  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495547  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:33:07.495553  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.495561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.495568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.497087  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.497100  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.497105  160939 round_trippers.go:580]     Audit-Id: 1fe00356-708f-49ce-b6e8-360006eb0d30
	I0522 18:33:07.497109  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.497114  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.497119  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.497123  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.497129  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.497236  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:33:07.671971  160939 request.go:629] Waited for 174.334017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672035  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672040  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.672048  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.672051  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.673738  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.673754  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.673762  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.673769  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.673773  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.673777  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.673781  160939 round_trippers.go:580]     Audit-Id: 72f84e56-248f-49c0-b60e-16c5fc7a3e8c
	I0522 18:33:07.673785  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.673915  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.674199  160939 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.674216  160939 pod_ready.go:81] duration metric: took 178.701037ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
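The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's own token-bucket limiter, not by server-side API Priority and Fairness. That limiter is controlled by the QPS and Burst fields on rest.Config; a minimal sketch of raising them (the kubeconfig path is illustrative, not from this run):

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; minikube writes its own under the
	// test's MINIKUBE_HOME profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	// client-go defaults to QPS=5 and Burst=10; once the burst is spent,
	// requests are delayed client-side, producing the "Waited for ..." lines.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}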
	I0522 18:33:07.674225  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.871582  160939 request.go:629] Waited for 197.277518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871632  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.871646  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.871651  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.873675  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.873695  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.873702  160939 round_trippers.go:580]     Audit-Id: d0aea0c3-6995-4f17-9b3f-5c0b00c0a82e
	I0522 18:33:07.873707  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.873710  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.873714  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.873718  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.873721  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.873885  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:33:08.071516  160939 request.go:629] Waited for 197.279562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071592  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071600  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.071608  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.071612  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.073750  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.074093  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.074136  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.074152  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.074164  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.074178  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.074192  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.074205  160939 round_trippers.go:580]     Audit-Id: 9b07fddc-fd9a-4741-b67f-7bda2d392bdb
	I0522 18:33:08.074358  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:08.074852  160939 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:08.074892  160939 pod_ready.go:81] duration metric: took 400.659133ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:08.074912  160939 pod_ready.go:38] duration metric: took 16.114074117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
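Each "waiting up to 6m0s for pod ... to be Ready" block above is a poll loop: fetch the pod, inspect its Ready condition, and re-fetch the node it runs on. A condensed sketch of the pod half of that check with client-go (the helper name and kubeconfig path are illustrative, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod once per second until its Ready condition is True,
// the same shape as the "waiting up to 6m0s for pod ..." loops above.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-7db6d8ff4d-jhsz9", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}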
	I0522 18:33:08.074944  160939 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:33:08.075020  160939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:33:08.085416  160939 command_runner.go:130] > 2247
	I0522 18:33:08.086205  160939 api_server.go:72] duration metric: took 16.337127031s to wait for apiserver process to appear ...
	I0522 18:33:08.086224  160939 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:33:08.086244  160939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:33:08.090306  160939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:33:08.090371  160939 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:33:08.090381  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.090392  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.090411  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.091107  160939 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:33:08.091121  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.091127  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.091130  160939 round_trippers.go:580]     Audit-Id: d9f416c6-963b-4b2c-9260-40a10a9a60da
	I0522 18:33:08.091133  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.091136  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.091138  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.091141  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.091144  160939 round_trippers.go:580]     Content-Length: 263
	I0522 18:33:08.091156  160939 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:33:08.091223  160939 api_server.go:141] control plane version: v1.30.1
	I0522 18:33:08.091237  160939 api_server.go:131] duration metric: took 5.007834ms to wait for apiserver health ...
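The healthz probe is a raw GET against /healthz (a healthy apiserver answers 200 with the literal body "ok"), and the follow-up request maps to GET /version, which client-go exposes through the discovery client. A small sketch of both calls, assuming a working kubeconfig:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /healthz: signals apiserver health via status code and body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	// GET /version: returns the version.Info JSON printed in the log above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s, control plane: %s\n", body, v.GitVersion)
}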
	I0522 18:33:08.091244  160939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:33:08.271652  160939 request.go:629] Waited for 180.311539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271713  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271719  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.271727  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.271732  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.282797  160939 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0522 18:33:08.282826  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.282835  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.282840  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.282847  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.282853  160939 round_trippers.go:580]     Audit-Id: abfdd3f0-3612-4cc0-9cb4-169b86afc2f2
	I0522 18:33:08.282857  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.282862  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.284550  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.287099  160939 system_pods.go:59] 8 kube-system pods found
	I0522 18:33:08.287133  160939 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.287139  160939 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.287143  160939 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.287148  160939 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.287156  160939 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.287161  160939 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.287170  160939 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.287175  160939 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.287184  160939 system_pods.go:74] duration metric: took 195.931068ms to wait for pod list to return data ...
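The system_pods check is a single list of the kube-system namespace followed by a per-pod status check. Roughly equivalent client-go code (kubeconfig path illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	// Matches the per-pod lines above: name, UID, and phase for each pod.
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}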
	I0522 18:33:08.287199  160939 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:33:08.471518  160939 request.go:629] Waited for 184.244722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471609  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471620  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.471632  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.471638  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.473861  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.473879  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.473885  160939 round_trippers.go:580]     Audit-Id: 373a6323-7376-4ad7-973b-c7b9843fbc1e
	I0522 18:33:08.473889  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.473892  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.473895  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.473898  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.473902  160939 round_trippers.go:580]     Content-Length: 261
	I0522 18:33:08.473906  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.473926  160939 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:33:08.474181  160939 default_sa.go:45] found service account: "default"
	I0522 18:33:08.474221  160939 default_sa.go:55] duration metric: took 187.005275ms for default service account to be created ...
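The default_sa step polls until the namespace's "default" ServiceAccount exists, since the token controller creates it asynchronously after the namespace. A hedged sketch of that wait, treating NotFound as "not yet":

package main

import (
	"context"
	"fmt"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if err == nil {
				return true, nil
			}
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return false, err
		})
	if err != nil {
		panic(err)
	}
	fmt.Println(`found service account: "default"`)
}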
	I0522 18:33:08.474236  160939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:33:08.671668  160939 request.go:629] Waited for 197.344631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671731  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671738  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.671747  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.671754  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.674660  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.674693  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.674702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.674707  160939 round_trippers.go:580]     Audit-Id: a86ce0e7-c7ca-4d9a-b3f4-5977392399ab
	I0522 18:33:08.674710  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.674715  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.674721  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.674726  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.675199  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.677649  160939 system_pods.go:86] 8 kube-system pods found
	I0522 18:33:08.677676  160939 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.677682  160939 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.677689  160939 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.677700  160939 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.677712  160939 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.677718  160939 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.677728  160939 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.677736  160939 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.677746  160939 system_pods.go:126] duration metric: took 203.502619ms to wait for k8s-apps to be running ...
	I0522 18:33:08.677758  160939 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:33:08.677814  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:33:08.688253  160939 system_svc.go:56] duration metric: took 10.491535ms WaitForService to wait for kubelet
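The kubelet check shells out to systemctl and relies only on the exit code; minikube runs it over SSH via its ssh_runner. The equivalent local check in Go:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" prints nothing and signals state
	// purely through its exit code: 0 means active, non-zero means not.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}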
	I0522 18:33:08.688273  160939 kubeadm.go:576] duration metric: took 16.939194998s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:33:08.688296  160939 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:33:08.871835  160939 request.go:629] Waited for 183.471986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871919  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.871941  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.871948  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.873838  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:08.873861  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.873868  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.873874  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.873881  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.873884  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.873888  160939 round_trippers.go:580]     Audit-Id: 58d6eaf2-6ad2-480d-a68d-b490633e56b2
	I0522 18:33:08.873893  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.874043  160939 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"433","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5061 chars]
	I0522 18:33:08.874388  160939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:33:08.874407  160939 node_conditions.go:123] node cpu capacity is 8
	I0522 18:33:08.874418  160939 node_conditions.go:105] duration metric: took 186.116583ms to run NodePressure ...
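The NodePressure step reads each node's capacity and scans its conditions for memory and disk pressure, which is where the "storage ephemeral capacity" and "cpu capacity" figures above come from. A sketch of the same reads:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "multinode-737786", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	for _, c := range node.Status.Conditions {
		if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
			c.Status == corev1.ConditionTrue {
			fmt.Printf("node is under pressure: %s\n", c.Type)
		}
	}
}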
	I0522 18:33:08.874431  160939 start.go:240] waiting for startup goroutines ...
	I0522 18:33:08.874437  160939 start.go:245] waiting for cluster config update ...
	I0522 18:33:08.874451  160939 start.go:254] writing updated cluster config ...
	I0522 18:33:08.876274  160939 out.go:177] 
	I0522 18:33:08.877676  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:33:08.877789  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.879303  160939 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:33:08.880612  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:33:08.881728  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:33:08.882756  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:08.882774  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:33:08.882785  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:33:08.882855  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:33:08.882870  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:33:08.882934  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.898326  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:33:08.898343  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:33:08.898358  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:33:08.898387  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:33:08.898479  160939 start.go:364] duration metric: took 72.592µs to acquireMachinesLock for "multinode-737786-m02"
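acquireMachinesLock serializes machine creation behind a named mutex with the 500ms retry delay and 10m timeout shown in the lock config above. Minikube uses a dedicated mutex library for this; the lock-file loop below is only an illustration of the idea, not its implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquire spins on O_EXCL creation of a lock file: a stand-in for the
// named-mutex behaviour behind "acquireMachinesLock" (not minikube's code).
func acquire(path string, timeout time.Duration) (release func(), err error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			return func() { os.Remove(path) }, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(500 * time.Millisecond) // matches the 500ms Delay in the log's lock config
	}
}

func main() {
	release, err := acquire("/tmp/minikube-machines.lock", 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer release()
	fmt.Println("lock held; safe to create the machine")
}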
	I0522 18:33:08.898505  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:33:08.898623  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:33:08.900307  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:33:08.900408  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:33:08.900435  160939 client.go:168] LocalClient.Create starting
	I0522 18:33:08.900508  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:33:08.900541  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900564  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900623  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:33:08.900647  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900668  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900894  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:33:08.915750  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc001f32540 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:33:08.915790  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
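kic.go derives the worker's static IP from the existing cluster network: the gateway is 192.168.67.1, the primary node took .2, so m02 gets .3. One plausible way to express that arithmetic (the indexing scheme here is an assumption for illustration):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Gateway .1 is reserved; machine N in the cluster gets gateway + N.
	// Illustrative indexing: the primary node is machine 1 at .2, m02 is
	// machine 2 at .3.
	gw := netip.MustParseAddr("192.168.67.1")
	ip := gw
	for i := 0; i < 2; i++ { // m02 is the second machine
		ip = ip.Next()
	}
	fmt.Println(ip) // 192.168.67.3
}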
	I0522 18:33:08.915845  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:33:08.930295  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:33:08.945898  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:33:08.945964  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:33:09.453161  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:33:09.453202  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:09.453224  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:33:09.453289  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:33:13.570301  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.116968437s)
	I0522 18:33:13.570337  160939 kic.go:203] duration metric: took 4.117109757s to extract preloaded images to volume ...
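The preload step is plain docker plumbing: mount the lz4 tarball read-only at /preloaded.tar, mount the node's named volume at /extractDir, and run tar as the container's entrypoint. A Go sketch of the same invocation via os/exec (the tarball path is a placeholder and the image's @sha256 digest pin from the log is omitted):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the cli_runner invocation above; the kicbase image only acts
	// as a convenient environment that already contains tar and lz4.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro",
		"-v", "multinode-737786-m02:/extractDir",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887",
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		panic(err)
	}
}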
	W0522 18:33:13.570466  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:33:13.570568  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:33:13.614931  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:33:13.883217  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:33:13.899745  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:13.916953  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:33:13.956223  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:33:13.956258  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:33:14.377830  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:33:14.377884  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:33:14.398081  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.414616  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:33:14.414636  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
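
The key setup above copies id_rsa.pub into the container as /home/docker/.ssh/authorized_keys and then chowns it. The same effect can be had with a single privileged docker exec that streams the key over stdin; a sketch under that assumption (the path and the use of install(1) are illustrative, not the kic_runner's actual transfer path):

	package main

	import (
		"bytes"
		"log"
		"os"
		"os/exec"
	)

	func main() {
		pub, err := os.ReadFile("id_rsa.pub") // path is illustrative
		if err != nil {
			log.Fatal(err)
		}
		// Stream the public key over stdin and fix ownership in one privileged
		// exec, mirroring the authorized_keys copy and chown steps in the log.
		cmd := exec.Command("docker", "exec", "--privileged", "-i", "multinode-737786-m02",
			"sh", "-c",
			"install -d -m 700 -o docker -g docker /home/docker/.ssh && "+
				"cat > /home/docker/.ssh/authorized_keys && "+
				"chown docker:docker /home/docker/.ssh/authorized_keys")
		cmd.Stdin = bytes.NewReader(pub)
		if out, err := cmd.CombinedOutput(); err != nil {
			log.Fatalf("%v: %s", err, out)
		}
	}
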
	I0522 18:33:14.454848  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.472868  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:33:14.472944  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.489872  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.490088  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.490103  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:33:14.602489  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.602516  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:33:14.602569  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.619132  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.619380  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.619398  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:33:14.740786  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.740854  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.756827  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.756995  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.757012  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:33:14.867113  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
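
provisionDockerMachine drives these hostname and /etc/hosts commands over the "native" SSH client against the published port 127.0.0.1:32902, authenticating with the machine's id_rsa. A self-contained sketch of one such round-trip using golang.org/x/crypto/ssh; the port and key path are taken from the log, the rest is illustrative:

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa")
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:32902", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		out, err := sess.CombinedOutput("hostname")
		if err != nil {
			log.Fatal(err)
		}
		fmt.Printf("%s", out) // expect "multinode-737786-m02"
	}
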
	I0522 18:33:14.867142  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:33:14.867157  160939 ubuntu.go:177] setting up certificates
	I0522 18:33:14.867169  160939 provision.go:84] configureAuth start
	I0522 18:33:14.867230  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.882769  160939 provision.go:87] duration metric: took 15.590775ms to configureAuth
	W0522 18:33:14.882788  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.882814  160939 retry.go:31] will retry after 133.214µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
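
From here the log records repeated configureAuth attempts whose retry intervals grow from microseconds to tens of seconds with jitter, the shape of a capped exponential backoff. A minimal Go sketch of that pattern (not minikube's actual retry package; the cap and multiplier are assumptions):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryExpo retries fn with jittered exponential backoff until it succeeds
	// or the total elapsed time exceeds maxElapsed.
	func retryExpo(fn func() error, maxElapsed time.Duration) error {
		start := time.Now()
		delay := 100 * time.Microsecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up: %w", err)
			}
			sleep := time.Duration(float64(delay) * (0.5 + rand.Float64())) // jitter
			fmt.Printf("will retry after %v: %v\n", sleep, err)
			time.Sleep(sleep)
			if delay < 30*time.Second {
				delay *= 2
			}
		}
	}

	func main() {
		attempts := 0
		_ = retryExpo(func() error {
			attempts++
			if attempts < 5 {
				return errors.New("Temporary Error: error getting ip during provisioning")
			}
			return nil
		}, time.Minute)
	}
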
	I0522 18:33:14.883930  160939 provision.go:84] configureAuth start
	I0522 18:33:14.883986  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.899452  160939 provision.go:87] duration metric: took 15.501642ms to configureAuth
	W0522 18:33:14.899474  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.899491  160939 retry.go:31] will retry after 108.916µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.900597  160939 provision.go:84] configureAuth start
	I0522 18:33:14.900654  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.915555  160939 provision.go:87] duration metric: took 14.940574ms to configureAuth
	W0522 18:33:14.915579  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.915597  160939 retry.go:31] will retry after 309.632µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.916706  160939 provision.go:84] configureAuth start
	I0522 18:33:14.916763  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.931974  160939 provision.go:87] duration metric: took 15.250688ms to configureAuth
	W0522 18:33:14.931998  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.932022  160939 retry.go:31] will retry after 318.322µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.933148  160939 provision.go:84] configureAuth start
	I0522 18:33:14.933214  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.948456  160939 provision.go:87] duration metric: took 15.28648ms to configureAuth
	W0522 18:33:14.948480  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.948498  160939 retry.go:31] will retry after 399.734µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.949641  160939 provision.go:84] configureAuth start
	I0522 18:33:14.949703  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.966281  160939 provision.go:87] duration metric: took 16.616876ms to configureAuth
	W0522 18:33:14.966304  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.966321  160939 retry.go:31] will retry after 408.958µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.967426  160939 provision.go:84] configureAuth start
	I0522 18:33:14.967490  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.983570  160939 provision.go:87] duration metric: took 16.124586ms to configureAuth
	W0522 18:33:14.983595  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.983618  160939 retry.go:31] will retry after 1.326072ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.985801  160939 provision.go:84] configureAuth start
	I0522 18:33:14.985868  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.000835  160939 provision.go:87] duration metric: took 15.012309ms to configureAuth
	W0522 18:33:15.000856  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.000876  160939 retry.go:31] will retry after 915.276µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.001989  160939 provision.go:84] configureAuth start
	I0522 18:33:15.002061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.016920  160939 provision.go:87] duration metric: took 14.912197ms to configureAuth
	W0522 18:33:15.016940  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.016956  160939 retry.go:31] will retry after 2.309554ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.020139  160939 provision.go:84] configureAuth start
	I0522 18:33:15.020206  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.035720  160939 provision.go:87] duration metric: took 15.563337ms to configureAuth
	W0522 18:33:15.035737  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.035758  160939 retry.go:31] will retry after 5.684682ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.041949  160939 provision.go:84] configureAuth start
	I0522 18:33:15.042023  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.057131  160939 provision.go:87] duration metric: took 15.161716ms to configureAuth
	W0522 18:33:15.057153  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.057173  160939 retry.go:31] will retry after 7.16749ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.065354  160939 provision.go:84] configureAuth start
	I0522 18:33:15.065419  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.080211  160939 provision.go:87] duration metric: took 14.836861ms to configureAuth
	W0522 18:33:15.080233  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.080253  160939 retry.go:31] will retry after 11.273171ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.092437  160939 provision.go:84] configureAuth start
	I0522 18:33:15.092522  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.107812  160939 provision.go:87] duration metric: took 15.35491ms to configureAuth
	W0522 18:33:15.107829  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.107845  160939 retry.go:31] will retry after 8.109728ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.117029  160939 provision.go:84] configureAuth start
	I0522 18:33:15.117103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.132558  160939 provision.go:87] duration metric: took 15.508983ms to configureAuth
	W0522 18:33:15.132577  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.132597  160939 retry.go:31] will retry after 10.345201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.143792  160939 provision.go:84] configureAuth start
	I0522 18:33:15.143857  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.159011  160939 provision.go:87] duration metric: took 15.196792ms to configureAuth
	W0522 18:33:15.159034  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.159054  160939 retry.go:31] will retry after 30.499115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.190240  160939 provision.go:84] configureAuth start
	I0522 18:33:15.190329  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.207177  160939 provision.go:87] duration metric: took 16.913741ms to configureAuth
	W0522 18:33:15.207195  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.207211  160939 retry.go:31] will retry after 63.879043ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.271445  160939 provision.go:84] configureAuth start
	I0522 18:33:15.271548  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.287528  160939 provision.go:87] duration metric: took 16.057048ms to configureAuth
	W0522 18:33:15.287550  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.287569  160939 retry.go:31] will retry after 67.853567ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.355802  160939 provision.go:84] configureAuth start
	I0522 18:33:15.355901  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.372258  160939 provision.go:87] duration metric: took 16.425467ms to configureAuth
	W0522 18:33:15.372281  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.372300  160939 retry.go:31] will retry after 129.065548ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.501513  160939 provision.go:84] configureAuth start
	I0522 18:33:15.501606  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.517774  160939 provision.go:87] duration metric: took 16.234544ms to configureAuth
	W0522 18:33:15.517792  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.517809  160939 retry.go:31] will retry after 177.855143ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.696167  160939 provision.go:84] configureAuth start
	I0522 18:33:15.696277  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.712184  160939 provision.go:87] duration metric: took 15.973904ms to configureAuth
	W0522 18:33:15.712203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.712222  160939 retry.go:31] will retry after 282.785493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.995691  160939 provision.go:84] configureAuth start
	I0522 18:33:15.995782  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.011555  160939 provision.go:87] duration metric: took 15.836293ms to configureAuth
	W0522 18:33:16.011573  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.011590  160939 retry.go:31] will retry after 182.7986ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.194929  160939 provision.go:84] configureAuth start
	I0522 18:33:16.195022  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.210991  160939 provision.go:87] duration metric: took 16.035288ms to configureAuth
	W0522 18:33:16.211015  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.211031  160939 retry.go:31] will retry after 462.848752ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.674586  160939 provision.go:84] configureAuth start
	I0522 18:33:16.674669  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.691880  160939 provision.go:87] duration metric: took 17.266922ms to configureAuth
	W0522 18:33:16.691906  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.691924  160939 retry.go:31] will retry after 502.555206ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.194526  160939 provision.go:84] configureAuth start
	I0522 18:33:17.194646  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.210421  160939 provision.go:87] duration metric: took 15.865877ms to configureAuth
	W0522 18:33:17.210440  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.210460  160939 retry.go:31] will retry after 567.726401ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.779177  160939 provision.go:84] configureAuth start
	I0522 18:33:17.779290  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.795539  160939 provision.go:87] duration metric: took 16.336289ms to configureAuth
	W0522 18:33:17.795558  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.795575  160939 retry.go:31] will retry after 1.826878631s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.622720  160939 provision.go:84] configureAuth start
	I0522 18:33:19.622824  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:19.638518  160939 provision.go:87] duration metric: took 15.756609ms to configureAuth
	W0522 18:33:19.638535  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.638551  160939 retry.go:31] will retry after 1.924893574s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.564442  160939 provision.go:84] configureAuth start
	I0522 18:33:21.564544  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:21.580835  160939 provision.go:87] duration metric: took 16.362041ms to configureAuth
	W0522 18:33:21.580858  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.580874  160939 retry.go:31] will retry after 4.939303373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.521956  160939 provision.go:84] configureAuth start
	I0522 18:33:26.522061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:26.537982  160939 provision.go:87] duration metric: took 16.001203ms to configureAuth
	W0522 18:33:26.538004  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.538030  160939 retry.go:31] will retry after 3.636518909s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.175081  160939 provision.go:84] configureAuth start
	I0522 18:33:30.175184  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:30.191022  160939 provision.go:87] duration metric: took 15.915164ms to configureAuth
	W0522 18:33:30.191041  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.191058  160939 retry.go:31] will retry after 10.480093853s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.671328  160939 provision.go:84] configureAuth start
	I0522 18:33:40.671406  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:40.687409  160939 provision.go:87] duration metric: took 16.054951ms to configureAuth
	W0522 18:33:40.687427  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.687455  160939 retry.go:31] will retry after 15.937633407s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.627256  160939 provision.go:84] configureAuth start
	I0522 18:33:56.627376  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:56.643481  160939 provision.go:87] duration metric: took 16.179065ms to configureAuth
	W0522 18:33:56.643501  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.643521  160939 retry.go:31] will retry after 13.921044681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.565323  160939 provision.go:84] configureAuth start
	I0522 18:34:10.565412  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:10.582184  160939 provision.go:87] duration metric: took 16.828213ms to configureAuth
	W0522 18:34:10.582203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.582221  160939 retry.go:31] will retry after 29.913467421s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.496709  160939 provision.go:84] configureAuth start
	I0522 18:34:40.496791  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:40.512924  160939 provision.go:87] duration metric: took 16.185762ms to configureAuth
	W0522 18:34:40.512946  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512964  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512971  160939 machine.go:97] duration metric: took 1m26.040084691s to provisionDockerMachine
	I0522 18:34:40.512977  160939 client.go:171] duration metric: took 1m31.612534317s to LocalClient.Create
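
Every attempt above fails identically: the inspect template indexes .NetworkSettings.Networks by the container name "multinode-737786-m02", but the container is attached to the network named "multinode-737786", so the template expands to an empty string; split on ",", that is one field instead of the expected "<ipv4>,<ipv6>" pair. A sketch of a parse-and-validate step that reproduces the exact error shape seen in the log (the function name is illustrative):

	package main

	import (
		"fmt"
		"strings"
	)

	// parseIPs mirrors the check behind "container addresses should have 2 values":
	// the inspect template is expected to print "<ipv4>,<ipv6>"; an empty
	// expansion splits into a single empty field and fails validation.
	func parseIPs(inspectOutput string) (ipv4, ipv6 string, err error) {
		vals := strings.Split(strings.TrimSpace(inspectOutput), ",")
		if len(vals) != 2 {
			return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(vals), vals)
		}
		return vals[0], vals[1], nil
	}

	func main() {
		// The template looked up the wrong key in the networks map, so the
		// docker inspect output was empty.
		if _, _, err := parseIPs(""); err != nil {
			fmt.Println(err) // container addresses should have 2 values, got 1 values: []
		}
	}
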
	I0522 18:34:42.514189  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.514234  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:42.530404  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:34:42.611715  160939 command_runner.go:130] > 27%!
	(MISSING)
	I0522 18:34:42.611789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:34:42.615669  160939 command_runner.go:130] > 214G
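
The "27%!" / "(MISSING)" rendering above is a Go fmt artifact rather than real df output: the command printed "27%" plus a newline, and somewhere along the logging path that string was used as a printf-style format, so the trailing '%' consumed the newline as a verb with no operand. A two-line demonstration:

	package main

	import "fmt"

	func main() {
		out := "27%\n" // df's "use%" column for /var, as returned by the command
		// Deliberately (mis)used as a format string: the trailing '%' takes the
		// newline as its verb and has no operand, so fmt renders "%!\n(MISSING)".
		fmt.Printf("> " + out) // prints "> 27%!" then "(MISSING)" on the next line
	}
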
	I0522 18:34:42.615707  160939 start.go:128] duration metric: took 1m33.717073149s to createHost
	I0522 18:34:42.615722  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m33.717228717s
	W0522 18:34:42.615744  160939 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:42.616137  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:42.632434  160939 stop.go:39] StopHost: multinode-737786-m02
	W0522 18:34:42.632685  160939 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.634506  160939 out.go:177] * Stopping node "multinode-737786-m02"  ...
	I0522 18:34:42.635683  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	W0522 18:34:42.651010  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.652276  160939 out.go:177] * Powering off "multinode-737786-m02" via SSH ...
	I0522 18:34:42.653470  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	I0522 18:34:43.708767  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.725456  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:43.725497  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:43.725503  160939 stop.go:96] shutdown container: err=<nil>
	I0522 18:34:43.725538  160939 main.go:141] libmachine: Stopping "multinode-737786-m02"...
	I0522 18:34:43.725609  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.740494  160939 stop.go:66] stop err: Machine "multinode-737786-m02" is already stopped.
	I0522 18:34:43.740519  160939 stop.go:69] host is already stopped
	W0522 18:34:44.740739  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:44.742589  160939 out.go:177] * Deleting "multinode-737786-m02" in docker ...
	I0522 18:34:44.743791  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	I0522 18:34:44.759917  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:44.775348  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	W0522 18:34:44.791230  160939 cli_runner.go:211] docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:34:44.791265  160939 oci.go:650] error shutdown multinode-737786-m02: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 2dc5a71c55c9ef5d6ad1baa728c2ff15efe34f377c26beee83af68ffc394ce01 is not running
	I0522 18:34:45.792215  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:45.808448  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:45.808478  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
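
The power-off path runs "sudo init 0" inside the container and then re-inspects its state, treating a later "container ... is not running" error as already stopped. A compact sketch of that trigger-and-poll loop (the polling cadence and the status mapping are assumptions):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// status returns the container's .State.Status via docker inspect.
	func status(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		return strings.TrimSpace(string(out)), err
	}

	func main() {
		name := "multinode-737786-m02"
		// Ask init inside the container to power off; an error here is expected
		// if the container stopped between the trigger and the exec.
		_ = exec.Command("docker", "exec", "--privileged", "-t", name,
			"/bin/bash", "-c", "sudo init 0").Run()
		for i := 0; i < 10; i++ {
			if s, err := status(name); err == nil && s == "exited" {
				fmt.Println("container", name, "status is Stopped")
				return
			}
			time.Sleep(time.Second)
		}
		fmt.Println("timed out waiting for", name, "to stop")
	}
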
	I0522 18:34:45.808522  160939 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m02
	I0522 18:34:45.828241  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	W0522 18:34:45.843001  160939 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m02 returned with exit code 1
	I0522 18:34:45.843068  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:45.858067  160939 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:34:45.872863  160939 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:34:45.872955  160939 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
	W0522 18:34:45.873163  160939 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:45.873175  160939 start.go:728] Will try again in 5 seconds ...
	I0522 18:34:50.874261  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:34:50.874388  160939 start.go:364] duration metric: took 68.497µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:34:50.874412  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:34:50.874486  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:34:50.876407  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:34:50.876543  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:34:50.876576  160939 client.go:168] LocalClient.Create starting
	I0522 18:34:50.876662  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:34:50.876712  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876732  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.876835  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:34:50.876869  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876890  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.877138  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:50.893470  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc0009258c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:34:50.893509  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
	I0522 18:34:50.893558  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:34:50.909079  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:34:50.925444  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:34:50.925538  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:34:51.321868  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:34:51.321909  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:34:51.321928  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:34:51.321980  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:34:55.613221  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291204502s)
	I0522 18:34:55.613251  160939 kic.go:203] duration metric: took 4.291320169s to extract preloaded images to volume ...
	W0522 18:34:55.613360  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:34:55.613435  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:34:55.658317  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:34:55.924047  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:34:55.941247  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:55.958588  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:34:56.004446  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:34:56.004476  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:34:56.219497  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:34:56.219536  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:34:56.240489  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.268881  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:34:56.268907  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:34:56.353114  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.375972  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:34:56.376058  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.395706  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.395915  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.395934  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:34:56.554445  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.554477  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:34:56.554533  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.573230  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.573401  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.573414  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:34:56.702163  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.702242  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.718029  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.718187  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.718204  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:34:56.830876  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:34:56.830907  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:34:56.830922  160939 ubuntu.go:177] setting up certificates
	I0522 18:34:56.830931  160939 provision.go:84] configureAuth start
	I0522 18:34:56.830976  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.846805  160939 provision.go:87] duration metric: took 15.865379ms to configureAuth
	W0522 18:34:56.846831  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.846851  160939 retry.go:31] will retry after 140.64µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.847967  160939 provision.go:84] configureAuth start
	I0522 18:34:56.848042  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.862744  160939 provision.go:87] duration metric: took 14.756628ms to configureAuth
	W0522 18:34:56.862761  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.862777  160939 retry.go:31] will retry after 137.24µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.863887  160939 provision.go:84] configureAuth start
	I0522 18:34:56.863944  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.878368  160939 provision.go:87] duration metric: took 14.464443ms to configureAuth
	W0522 18:34:56.878383  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.878401  160939 retry.go:31] will retry after 307.999µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.879516  160939 provision.go:84] configureAuth start
	I0522 18:34:56.879573  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.894089  160939 provision.go:87] duration metric: took 14.555182ms to configureAuth
	W0522 18:34:56.894104  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.894119  160939 retry.go:31] will retry after 344.81µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.895224  160939 provision.go:84] configureAuth start
	I0522 18:34:56.895305  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.909660  160939 provision.go:87] duration metric: took 14.420335ms to configureAuth
	W0522 18:34:56.909677  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.909697  160939 retry.go:31] will retry after 721.739µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.910804  160939 provision.go:84] configureAuth start
	I0522 18:34:56.910856  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.925678  160939 provision.go:87] duration metric: took 14.857697ms to configureAuth
	W0522 18:34:56.925695  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.925714  160939 retry.go:31] will retry after 381.6µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.926834  160939 provision.go:84] configureAuth start
	I0522 18:34:56.926886  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.941681  160939 provision.go:87] duration metric: took 14.831201ms to configureAuth
	W0522 18:34:56.941702  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.941722  160939 retry.go:31] will retry after 897.088µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.942836  160939 provision.go:84] configureAuth start
	I0522 18:34:56.942908  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.957491  160939 provision.go:87] duration metric: took 14.636033ms to configureAuth
	W0522 18:34:56.957512  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.957529  160939 retry.go:31] will retry after 1.800181ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.959714  160939 provision.go:84] configureAuth start
	I0522 18:34:56.959790  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.976307  160939 provision.go:87] duration metric: took 16.571335ms to configureAuth
	W0522 18:34:56.976326  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.976342  160939 retry.go:31] will retry after 2.324455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.979479  160939 provision.go:84] configureAuth start
	I0522 18:34:56.979532  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.994677  160939 provision.go:87] duration metric: took 15.180277ms to configureAuth
	W0522 18:34:56.994693  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.994709  160939 retry.go:31] will retry after 3.105759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.998893  160939 provision.go:84] configureAuth start
	I0522 18:34:56.998946  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.014214  160939 provision.go:87] duration metric: took 15.303755ms to configureAuth
	W0522 18:34:57.014235  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.014254  160939 retry.go:31] will retry after 5.839455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.020445  160939 provision.go:84] configureAuth start
	I0522 18:34:57.020525  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.035868  160939 provision.go:87] duration metric: took 15.4048ms to configureAuth
	W0522 18:34:57.035886  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.035903  160939 retry.go:31] will retry after 5.406932ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.042088  160939 provision.go:84] configureAuth start
	I0522 18:34:57.042156  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.058449  160939 provision.go:87] duration metric: took 16.342041ms to configureAuth
	W0522 18:34:57.058472  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.058492  160939 retry.go:31] will retry after 11.838168ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.070675  160939 provision.go:84] configureAuth start
	I0522 18:34:57.070741  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.085470  160939 provision.go:87] duration metric: took 14.777244ms to configureAuth
	W0522 18:34:57.085486  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.085502  160939 retry.go:31] will retry after 23.959822ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.109694  160939 provision.go:84] configureAuth start
	I0522 18:34:57.109776  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.124985  160939 provision.go:87] duration metric: took 15.261358ms to configureAuth
	W0522 18:34:57.125000  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.125016  160939 retry.go:31] will retry after 27.869578ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.153221  160939 provision.go:84] configureAuth start
	I0522 18:34:57.153307  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.169108  160939 provision.go:87] duration metric: took 15.85438ms to configureAuth
	W0522 18:34:57.169127  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.169146  160939 retry.go:31] will retry after 51.257536ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.221342  160939 provision.go:84] configureAuth start
	I0522 18:34:57.221408  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.237003  160939 provision.go:87] duration metric: took 15.637311ms to configureAuth
	W0522 18:34:57.237024  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.237043  160939 retry.go:31] will retry after 39.576908ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.277194  160939 provision.go:84] configureAuth start
	I0522 18:34:57.277272  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.292521  160939 provision.go:87] duration metric: took 15.297184ms to configureAuth
	W0522 18:34:57.292539  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.292557  160939 retry.go:31] will retry after 99.452062ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.392811  160939 provision.go:84] configureAuth start
	I0522 18:34:57.392913  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.410711  160939 provision.go:87] duration metric: took 17.84636ms to configureAuth
	W0522 18:34:57.410765  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.410815  160939 retry.go:31] will retry after 143.960372ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.555133  160939 provision.go:84] configureAuth start
	I0522 18:34:57.555208  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.571320  160939 provision.go:87] duration metric: took 16.160526ms to configureAuth
	W0522 18:34:57.571343  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.571360  160939 retry.go:31] will retry after 155.348601ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.727681  160939 provision.go:84] configureAuth start
	I0522 18:34:57.727762  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.743313  160939 provision.go:87] duration metric: took 15.603694ms to configureAuth
	W0522 18:34:57.743335  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.743351  160939 retry.go:31] will retry after 378.804808ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.122902  160939 provision.go:84] configureAuth start
	I0522 18:34:58.123010  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.139688  160939 provision.go:87] duration metric: took 16.744877ms to configureAuth
	W0522 18:34:58.139707  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.139724  160939 retry.go:31] will retry after 334.927027ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.475218  160939 provision.go:84] configureAuth start
	I0522 18:34:58.475348  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.491224  160939 provision.go:87] duration metric: took 15.959288ms to configureAuth
	W0522 18:34:58.491241  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.491258  160939 retry.go:31] will retry after 382.857061ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.874898  160939 provision.go:84] configureAuth start
	I0522 18:34:58.875006  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.891400  160939 provision.go:87] duration metric: took 16.476022ms to configureAuth
	W0522 18:34:58.891425  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.891445  160939 retry.go:31] will retry after 908.607112ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.800452  160939 provision.go:84] configureAuth start
	I0522 18:34:59.800565  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:59.817521  160939 provision.go:87] duration metric: took 17.040678ms to configureAuth
	W0522 18:34:59.817541  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.817559  160939 retry.go:31] will retry after 2.399990762s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.218011  160939 provision.go:84] configureAuth start
	I0522 18:35:02.218103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:02.233382  160939 provision.go:87] duration metric: took 15.343422ms to configureAuth
	W0522 18:35:02.233400  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.233417  160939 retry.go:31] will retry after 3.631413751s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.866094  160939 provision.go:84] configureAuth start
	I0522 18:35:05.866192  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:05.883038  160939 provision.go:87] duration metric: took 16.913162ms to configureAuth
	W0522 18:35:05.883057  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.883075  160939 retry.go:31] will retry after 4.401726343s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.285941  160939 provision.go:84] configureAuth start
	I0522 18:35:10.286047  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:10.303158  160939 provision.go:87] duration metric: took 17.185304ms to configureAuth
	W0522 18:35:10.303178  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.303195  160939 retry.go:31] will retry after 5.499851087s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.803345  160939 provision.go:84] configureAuth start
	I0522 18:35:15.803456  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:15.820047  160939 provision.go:87] duration metric: took 16.668915ms to configureAuth
	W0522 18:35:15.820069  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.820088  160939 retry.go:31] will retry after 6.21478213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.035749  160939 provision.go:84] configureAuth start
	I0522 18:35:22.035888  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:22.052346  160939 provision.go:87] duration metric: took 16.569923ms to configureAuth
	W0522 18:35:22.052365  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.052383  160939 retry.go:31] will retry after 10.717404274s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.770612  160939 provision.go:84] configureAuth start
	I0522 18:35:32.770702  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:32.786847  160939 provision.go:87] duration metric: took 16.20902ms to configureAuth
	W0522 18:35:32.786866  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.786882  160939 retry.go:31] will retry after 26.374349839s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.162251  160939 provision.go:84] configureAuth start
	I0522 18:35:59.162338  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:59.177866  160939 provision.go:87] duration metric: took 15.590678ms to configureAuth
	W0522 18:35:59.177883  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.177900  160939 retry.go:31] will retry after 23.779194983s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.957560  160939 provision.go:84] configureAuth start
	I0522 18:36:22.957642  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:36:22.973473  160939 provision.go:87] duration metric: took 15.882846ms to configureAuth
	W0522 18:36:22.973490  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973508  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973514  160939 machine.go:97] duration metric: took 1m26.59751999s to provisionDockerMachine
	I0522 18:36:22.973521  160939 client.go:171] duration metric: took 1m32.0969361s to LocalClient.Create
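	
	The exit path above stems from parsing the docker-inspect output. A minimal sketch of that parse in Go (an assumption about the mechanism, not minikube's exact source; container and network names are taken verbatim from the log):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	// containerAddresses mirrors the inspect-and-split step seen above: the Go
	// template renders "<IPv4>,<IPv6>" when the container is attached to the
	// named network and an empty string otherwise. Splitting "" on "," yields
	// one empty field, which is exactly the "should have 2 values, got 1
	// values: []" error driving the retry loop.
	func containerAddresses(container, network string) (string, string, error) {
		format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", "", err
		}
		fields := strings.Split(strings.TrimSpace(string(out)), ",")
		if len(fields) != 2 {
			return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(fields), fields)
		}
		return fields[0], fields[1], nil
	}
	
	func main() {
		ip4, ip6, err := containerAddresses("multinode-737786-m02", "multinode-737786-m02")
		fmt.Println(ip4, ip6, err)
	}
	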
	I0522 18:36:24.974123  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:24.974170  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:36:24.990325  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:36:25.071724  160939 command_runner.go:130] > 27%
	I0522 18:36:25.071789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:36:25.075456  160939 command_runner.go:130] > 214G
	I0522 18:36:25.075742  160939 start.go:128] duration metric: took 1m34.201241799s to createHost
	I0522 18:36:25.075767  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m34.20136546s
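	
	For reference, the two disk checks logged above can be reproduced and parsed as follows (a sketch assuming the trailing "%" and "G" are stripped before any threshold comparison; this is not claimed to be minikube's code):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
	)
	
	func main() {
		// Same pipelines as the ssh_runner commands above: column 5 of
		// `df -h /var` is the used percentage ("27%"); column 4 of
		// `df -BG /var` is the free space in gigabytes ("214G").
		used, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
		if err != nil {
			panic(err)
		}
		free, err := exec.Command("sh", "-c", `df -BG /var | awk 'NR==2{print $4}'`).Output()
		if err != nil {
			panic(err)
		}
		usedPct, _ := strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(used)), "%"))
		freeGB, _ := strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(free)), "G"))
		fmt.Printf("/var: %d%% used, %dG free\n", usedPct, freeGB)
	}
	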
	W0522 18:36:25.075854  160939 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:25.077767  160939 out.go:177] 
	W0522 18:36:25.079095  160939 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:36:25.079109  160939 out.go:239] * 
	W0522 18:36:25.079919  160939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:36:25.081455  160939 out.go:177] 
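	
	The waits announced by retry.go in the loop above (897µs, 1.8ms, 2.3ms, ... 10.7s, 26.4s) roughly double each attempt with random jitter. An illustrative schedule generator (an assumption about the backoff shape, not retry.go's exact arithmetic):
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	func main() {
		// Double the base delay per attempt, add jitter, and cap the delay;
		// this reproduces the shape of the logged intervals without sleeping.
		delay := 500 * time.Microsecond
		for attempt := 1; attempt <= 26; attempt++ {
			jitter := time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %2d: will retry after %v\n", attempt, delay+jitter)
			delay *= 2
			if limit := 30 * time.Second; delay > limit {
				delay = limit
			}
		}
	}
	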
	
	
	==> Docker <==
	May 22 18:32:27 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:27Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	May 22 18:32:27 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:27Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	May 22 18:32:27 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:27Z" level=info msg="Start cri-dockerd grpc backend"
	May 22 18:32:27 multinode-737786 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	May 22 18:32:32 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/65627abb3612282d6558ffb1aafad214a42aaed131116b1b8f31f678c74ef0f4/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:32 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f2b347dd216a58bc9c88f683631484d66c1337fda1386d98d45876825741536/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:32 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d92837fd4e76b3940b513386b4537e60ec327f94a8fd3e6a1239115d2266fdf/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:32 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/df5064710014068ec6e2be583b4634e08f642ea3e283ac01c4442141654e1ed8/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6eb49817ae60f74a05013589ee02e34a74389cab79c6039ddd296ff87e1db429/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/aa62dfdeffd066b83288a1a332ca7aa23f0d46d29573f332ad1d1d82281f438d/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d47b4f1b846de8efc0e1d2a9a093aa1c61b036813c0fa4e6fc255113be2d96f0/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-fhhmr_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-fhhmr_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:53 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/27a641da2a0926615e5fbbc9a970d575a8053259aa3e760938650e11374b631c/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:32:53 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-fhhmr_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:53 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:55 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:55Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240513-cd2ac642: Status: Downloaded newer image for kindest/kindnetd:v20240513-cd2ac642"
	May 22 18:32:58 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.198952588Z" level=info msg="ignoring event" container=ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.201580059Z" level=info msg="ignoring event" container=b73d925361c0506c710632a45f5377f1a6bdeaf15f268313a07afd0bac2a2011 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.284487223Z" level=info msg="ignoring event" container=6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.284636073Z" level=info msg="ignoring event" container=d47b4f1b846de8efc0e1d2a9a093aa1c61b036813c0fa4e6fc255113be2d96f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:33:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ada6e7b25c53306480ec3268f02ae3c0a31843cb50792174aefef87684d072cd/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                      CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	14ca8a91c3a85       cbb01a7bd410d                                                                              3 minutes ago       Running             coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8   3 minutes ago       Running             kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	16cb7c11afec8       6e38f40d628db                                                                              3 minutes ago       Running             storage-provisioner       0                   27a641da2a092       storage-provisioner
	b73d925361c05       cbb01a7bd410d                                                                              3 minutes ago       Exited              coredns                   0                   6711c2a968d71       coredns-7db6d8ff4d-jhsz9
	4394527287d9e       747097150317f                                                                              3 minutes ago       Running             kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                              3 minutes ago       Running             kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                              3 minutes ago       Running             etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                              3 minutes ago       Running             kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                              3 minutes ago       Running             kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	
	
	==> coredns [b73d925361c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
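	
	The "network is unreachable" errors from this first coredns instance amount to a failed TCP dial to the kubernetes Service VIP. A minimal probe of the same endpoint (assuming in-cluster execution; 10.96.0.1:443 is the Service address shown in the log):
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		// This is the dial the exited coredns container (b73d925361c0) kept
		// failing before the pod network was up; its replacement
		// (14ca8a91c3a8) started cleanly once networking converged.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
		if err != nil {
			fmt.Println("probe failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("10.96.0.1:443 reachable")
	}
	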
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:36:23 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:33:08 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:33:08 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:33:08 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:33:08 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 796df425fb994719a2b6ac89f60c2334
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m35s
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m50s
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m35s
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m33s  kube-proxy       
	  Normal  Starting                 3m49s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s  kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s  kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s  kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m35s  node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
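	
	The percentages in the two resource tables above follow from the node's Allocatable block (cpu 8, memory 32859356Ki). A quick arithmetic check, assuming integer truncation of request/allocatable as kubectl displays it:
	
	package main
	
	import "fmt"
	
	func main() {
		allocCPUMilli := int64(8 * 1000) // Allocatable: cpu 8 -> 8000m
		allocMemKi := int64(32859356)    // Allocatable: memory 32859356Ki
	
		fmt.Printf("cpu requests  850m  -> %d%%\n", 850*100/allocCPUMilli)   // 10%
		fmt.Printf("cpu limits    100m  -> %d%%\n", 100*100/allocCPUMilli)   // 1%
		fmt.Printf("mem requests  220Mi -> %d%%\n", 220*1024*100/allocMemKi) // 0%
	}
	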
	
	
	==> dmesg <==
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	[May22 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 88 87 ea 82 8c 08 06
	[  +0.002367] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 1a b3 ac 14 45 08 06
	[May22 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 89 e2 0f b2 b8 08 06
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.072735Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:32:33.073806Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:32:33.07401Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:32:33.074057Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:32:33.074703Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:32:33.074735Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:32:33.364321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.365643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.365639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:32:33.365646Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.365693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.36588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.365903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	
	
	==> kernel <==
	 18:36:26 up  1:18,  0 users,  load average: 0.17, 0.48, 0.48
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:34:26.109987       1 main.go:227] handling current node
	I0522 18:34:36.113527       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:34:36.113553       1 main.go:227] handling current node
	I0522 18:34:46.125348       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:34:46.125371       1 main.go:227] handling current node
	I0522 18:34:56.128210       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:34:56.128239       1 main.go:227] handling current node
	I0522 18:35:06.132790       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:35:06.132813       1 main.go:227] handling current node
	I0522 18:35:16.136347       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:35:16.136371       1 main.go:227] handling current node
	I0522 18:35:26.146115       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:35:26.146144       1 main.go:227] handling current node
	I0522 18:35:36.149423       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:35:36.149446       1 main.go:227] handling current node
	I0522 18:35:46.157823       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:35:46.157846       1 main.go:227] handling current node
	I0522 18:35:56.161182       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:35:56.161204       1 main.go:227] handling current node
	I0522 18:36:06.173056       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:36:06.173079       1 main.go:227] handling current node
	I0522 18:36:16.176394       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:36:16.176417       1 main.go:227] handling current node
	I0522 18:36:26.186047       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:36:26.186076       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6991b35c6800] <==
	I0522 18:32:35.368557       1 autoregister_controller.go:141] Starting autoregister controller
	I0522 18:32:35.368562       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0522 18:32:35.368568       1 cache.go:39] Caches are synced for autoregister controller
	E0522 18:32:35.444397       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0522 18:32:35.445464       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0522 18:32:35.449798       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:32:35.453291       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:32:35.453308       1 policy_source.go:224] refreshing policies
	I0522 18:32:35.468422       1 controller.go:615] quota admission added evaluator for: namespaces
	I0522 18:32:35.648097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:32:36.270908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 18:32:36.276360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 18:32:36.276373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:32:36.650126       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 18:32:36.683129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 18:32:36.777692       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 18:32:36.791941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0522 18:32:36.793832       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:32:36.798754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 18:32:37.359568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 18:32:37.803958       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 18:32:37.812834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 18:32:37.819384       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 18:32:51.513861       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 18:32:51.614880       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.410702       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0522 18:32:51.410782       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-737786"
	I0522 18:32:51.410826       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0522 18:32:51.457732       1 shared_informer.go:320] Caches are synced for disruption
	I0522 18:32:51.476661       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:32:51.501983       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:35.377344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 18:32:51 multinode-737786 kubelet[2370]: I0522 18:32:51.844340    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0123bfa5-2086-4863-9436-8a0b88e1d95a-config-volume\") pod \"coredns-7db6d8ff4d-jhsz9\" (UID: \"0123bfa5-2086-4863-9436-8a0b88e1d95a\") " pod="kube-system/coredns-7db6d8ff4d-jhsz9"
	May 22 18:32:52 multinode-737786 kubelet[2370]: I0522 18:32:52.860239    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90"
	May 22 18:32:52 multinode-737786 kubelet[2370]: I0522 18:32:52.957632    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d47b4f1b846de8efc0e1d2a9a093aa1c61b036813c0fa4e6fc255113be2d96f0"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.074988    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kqtgj" podStartSLOduration=2.0749678559999998 podStartE2EDuration="2.074967856s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:52.883037531 +0000 UTC m=+15.309607367" watchObservedRunningTime="2024-05-22 18:32:53.074967856 +0000 UTC m=+15.501537687"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.075356    2370 topology_manager.go:215] "Topology Admit Handler" podUID="5d953629-c86b-47be-84da-baa3bdf24d2e" podNamespace="kube-system" podName="storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.252849    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2q6h\" (UniqueName: \"kubernetes.io/projected/5d953629-c86b-47be-84da-baa3bdf24d2e-kube-api-access-g2q6h\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.252907    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.988563    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhhmr" podStartSLOduration=2.9885258439999998 podStartE2EDuration="2.988525844s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.988079663 +0000 UTC m=+16.414649501" watchObservedRunningTime="2024-05-22 18:32:53.988525844 +0000 UTC m=+16.415095679"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.995975    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.995953678 podStartE2EDuration="995.953678ms" podCreationTimestamp="2024-05-22 18:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.995721962 +0000 UTC m=+16.422291803" watchObservedRunningTime="2024-05-22 18:32:53.995953678 +0000 UTC m=+16.422523513"
	May 22 18:32:54 multinode-737786 kubelet[2370]: I0522 18:32:54.011952    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jhsz9" podStartSLOduration=3.011934656 podStartE2EDuration="3.011934656s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:54.011824217 +0000 UTC m=+16.438394051" watchObservedRunningTime="2024-05-22 18:32:54.011934656 +0000 UTC m=+16.438504490"
	May 22 18:32:56 multinode-737786 kubelet[2370]: I0522 18:32:56.027149    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qpfbl" podStartSLOduration=2.150242403 podStartE2EDuration="5.027130161s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="2024-05-22 18:32:52.549285586 +0000 UTC m=+14.975855404" lastFinishedPulling="2024-05-22 18:32:55.426173334 +0000 UTC m=+17.852743162" observedRunningTime="2024-05-22 18:32:56.026868759 +0000 UTC m=+18.453438592" watchObservedRunningTime="2024-05-22 18:32:56.027130161 +0000 UTC m=+18.453699994"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.024575    2370 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.025200    2370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467011    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467063    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467471    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.469105    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9" (OuterVolumeSpecName: "kube-api-access-44bz9") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "kube-api-access-44bz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567723    2370 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567767    2370 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.104709    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.116635    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.118819    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: E0522 18:33:07.119523    2370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.119568    2370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"} err="failed to get container status \"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de\": rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.656301    2370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" path="/var/lib/kubelet/pods/be9eeea7-ca23-4606-8965-0eb7a95e4a0d/volumes"
	
	
	==> storage-provisioner [16cb7c11afec] <==
	I0522 18:32:53.558799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:32:53.565899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:32:53.565955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:32:53.572167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:32:53.572280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	I0522 18:32:53.573084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef became leader
	I0522 18:32:53.672834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/FreshStart2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/FreshStart2Nodes (248.19s)
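For anyone replaying this failure locally, a quick first check (a diagnostic sketch, not part of the test harness; it assumes the kubeconfig context and profile name shown in the logs above) is whether the second node ever registered with the control plane:

	# list the nodes the apiserver knows about, with their readiness
	kubectl --context multinode-737786 get nodes -o wide
	# minikube's own view of the profile
	out/minikube-linux-amd64 status -p multinode-737786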

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (706.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- rollout status deployment/busybox
E0522 18:36:55.309822   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:37:24.838509   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 18:41:55.310593   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:42:07.892156   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 18:42:24.839095   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
multinode_test.go:498: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-737786 -- rollout status deployment/busybox: exit status 1 (10m3.89837313s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 2 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 2 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
multinode_test.go:500: failed to deploy busybox to multinode cluster
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0522 18:46:38.357060   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0522 18:46:55.310065   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
E0522 18:47:24.838560   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:524: failed to resolve pod IPs: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
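The repeated single-IP result above is what you would see if one of the two busybox replicas was never scheduled onto a node. A hedged way to confirm that from the same context (these commands are illustrative, not emitted by the test):

	# show which node, if any, each replica landed on
	kubectl --context multinode-737786 get pods -o wide
	# list replicas still waiting for a node
	kubectl --context multinode-737786 get pods --field-selector=status.phase=Pending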
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-7zbr8 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- nslookup kubernetes.io: exit status 1 (106.199318ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cq58n does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:538: Pod busybox-fc5497c4f-cq58n could not resolve 'kubernetes.io': exit status 1
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-7zbr8 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- nslookup kubernetes.default: exit status 1 (104.950492ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cq58n does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:548: Pod busybox-fc5497c4f-cq58n could not resolve 'kubernetes.default': exit status 1
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-7zbr8 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (105.181095ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cq58n does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:556: Pod busybox-fc5497c4f-cq58n could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
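"does not have a host assigned" is the apiserver's way of saying the pod is still Pending, so exec has no node to target. Under that assumption, the scheduler's reason is usually visible in the pod's events; a minimal check using the pod name from the failure above:

	# recent events for the unscheduled replica (reason is typically FailedScheduling)
	kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n | tail -n 20
	kubectl --context multinode-737786 get events --field-selector=reason=FailedScheduling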
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:32:24.061487531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f033da40320ba3759bccac938ed954a52e8591012b592a9d459eac191ead142",
	            "SandboxKey": "/var/run/docker/netns/0f033da40320",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "0dc537a1f234204c25e41871b0c1dd246d8d646b8557cafc1f206a6312a58796",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
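The inspect dump above can be reduced to the fields the post-mortem actually uses via docker's Go-template filter; a sketch against the same container name (note the index call, since the network name contains a hyphen):

	# container state plus its IP on the cluster network
	docker inspect -f '{{.State.Status}} {{(index .NetworkSettings.Networks "multinode-737786").IPAddress}}' multinode-737786
	# host port bound to the apiserver port inside the container
	docker port multinode-737786 8443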
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeployApp2Nodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/DeployApp2Nodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| stop    | -p mount-start-2-747898                           | mount-start-2-747898 | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| start   | -p mount-start-2-747898                           | mount-start-2-747898 | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| ssh     | mount-start-2-747898 ssh -- ls                    | mount-start-2-747898 | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-747898                           | mount-start-2-747898 | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| delete  | -p mount-start-1-736299                           | mount-start-1-736299 | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| start   | -p multinode-737786                               | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:32 UTC |                     |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- apply -f                   | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:36 UTC | 22 May 24 18:36 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- rollout                    | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:36 UTC |                     |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:47 UTC | 22 May 24 18:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:47 UTC | 22 May 24 18:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:32:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:32:18.820070  160939 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:32:18.820158  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820166  160939 out.go:304] Setting ErrFile to fd 2...
	I0522 18:32:18.820169  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820356  160939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:32:18.820906  160939 out.go:298] Setting JSON to false
	I0522 18:32:18.821847  160939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4483,"bootTime":1716398256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:32:18.821903  160939 start.go:139] virtualization: kvm guest
	I0522 18:32:18.825068  160939 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:32:18.826450  160939 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:32:18.826451  160939 notify.go:220] Checking for updates...
	I0522 18:32:18.827917  160939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:32:18.829159  160939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:18.830471  160939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:32:18.832039  160939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:32:18.833509  160939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:32:18.835235  160939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:32:18.856978  160939 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:32:18.857075  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.904065  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.895172586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.904163  160939 docker.go:295] overlay module found
	I0522 18:32:18.906205  160939 out.go:177] * Using the docker driver based on user configuration
	I0522 18:32:18.907716  160939 start.go:297] selected driver: docker
	I0522 18:32:18.907745  160939 start.go:901] validating driver "docker" against <nil>
	I0522 18:32:18.907759  160939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:32:18.908486  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.953709  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.945190998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.953883  160939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 18:32:18.954091  160939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:32:18.956247  160939 out.go:177] * Using Docker driver with root privileges
	I0522 18:32:18.957858  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:18.957878  160939 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 18:32:18.957886  160939 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 18:32:18.957966  160939 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
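The flag-derived cluster config above is persisted as profiles/multinode-737786/config.json (see the "Saving config" line below). A minimal Go sketch of that persistence step, using a drastically reduced stand-in struct rather than minikube's actual types:

package main

import (
	"encoding/json"
	"os"
	"path/filepath"
)

// ClusterConfig is a heavily reduced stand-in for minikube's config struct.
type ClusterConfig struct {
	Name              string
	Memory            int
	CPUs              int
	Driver            string
	KubernetesVersion string
}

// saveConfig writes profiles/<name>/config.json under the base directory,
// via a temp file + rename to mirror the lock-guarded write in the log.
func saveConfig(base string, cfg ClusterConfig) error {
	dir := filepath.Join(base, "profiles", cfg.Name)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	tmp := filepath.Join(dir, "config.json.tmp")
	if err := os.WriteFile(tmp, data, 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, filepath.Join(dir, "config.json"))
}

func main() {
	cfg := ClusterConfig{Name: "multinode-737786", Memory: 2200, CPUs: 2, Driver: "docker", KubernetesVersion: "v1.30.1"}
	if err := saveConfig(os.ExpandEnv("$HOME/.minikube"), cfg); err != nil {
		panic(err)
	}
}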
	I0522 18:32:18.959670  160939 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:32:18.961220  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:32:18.962715  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:32:18.964248  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:18.964293  160939 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:32:18.964303  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:32:18.964344  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:32:18.964398  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:32:18.964409  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:32:18.964718  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:18.964741  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json: {Name:mk43b46af9c3b0b30bdffa978db6463aacef7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:18.980726  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:32:18.980763  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
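The pull-skip decision above can be reproduced with the docker CLI alone; a small Go sketch using os/exec in place of minikube's internal image package:

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether ref is already present in the local docker
// daemon: `docker image inspect` exits non-zero when the image is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"
	if imageInDaemon(ref) {
		fmt.Println("found in local docker daemon, skipping pull")
		return
	}
	if out, err := exec.Command("docker", "pull", ref).CombinedOutput(); err != nil {
		fmt.Printf("pull failed: %v\n%s", err, out)
	}
}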
	I0522 18:32:18.980786  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:32:18.980821  160939 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:32:18.980939  160939 start.go:364] duration metric: took 90.565µs to acquireMachinesLock for "multinode-737786"
	I0522 18:32:18.980970  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:18.981093  160939 start.go:125] createHost starting for "" (driver="docker")
	I0522 18:32:18.983462  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:32:18.983714  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:32:18.983748  160939 client.go:168] LocalClient.Create starting
	I0522 18:32:18.983834  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:32:18.983868  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983888  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.983948  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:32:18.983967  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983980  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.984396  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 18:32:18.999077  160939 cli_runner.go:211] docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 18:32:18.999133  160939 network_create.go:281] running [docker network inspect multinode-737786] to gather additional debugging logs...
	I0522 18:32:18.999152  160939 cli_runner.go:164] Run: docker network inspect multinode-737786
	W0522 18:32:19.013736  160939 cli_runner.go:211] docker network inspect multinode-737786 returned with exit code 1
	I0522 18:32:19.013763  160939 network_create.go:284] error running [docker network inspect multinode-737786]: docker network inspect multinode-737786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-737786 not found
	I0522 18:32:19.013789  160939 network_create.go:286] output of [docker network inspect multinode-737786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-737786 not found
	
	** /stderr **
	I0522 18:32:19.013898  160939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:19.029452  160939 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-638c6f0967c1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:dc:4f:16} reservation:<nil>}
	I0522 18:32:19.029912  160939 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcc438b661e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:35:35:2f} reservation:<nil>}
	I0522 18:32:19.030359  160939 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a34820}
	I0522 18:32:19.030382  160939 network_create.go:124] attempt to create docker network multinode-737786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0522 18:32:19.030423  160939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-737786 multinode-737786
	I0522 18:32:19.080955  160939 network_create.go:108] docker network multinode-737786 192.168.67.0/24 created
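The subnet scan above (192.168.49.0/24 taken, 192.168.58.0/24 taken, 192.168.67.0/24 free) can be approximated as below. The candidate /24s and the step of 9 are inferred from this log's 49 → 58 → 67 progression; real minikube also consults host routes and interfaces:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// takenSubnets collects the IPAM subnet of every existing docker network.
func takenSubnets() map[string]bool {
	taken := map[string]bool{}
	ids, _ := exec.Command("docker", "network", "ls", "-q").Output()
	for _, id := range strings.Fields(string(ids)) {
		out, _ := exec.Command("docker", "network", "inspect", id,
			"--format", "{{range .IPAM.Config}}{{.Subnet}}{{end}}").Output()
		if s := strings.TrimSpace(string(out)); s != "" {
			taken[s] = true
		}
	}
	return taken
}

func main() {
	taken := takenSubnets()
	for third := 49; third <= 247; third += 9 { // 192.168.49.0/24, 192.168.58.0/24, ...
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken[subnet] {
			fmt.Println("using free private subnet", subnet)
			return
		}
		fmt.Println("skipping subnet", subnet, "that is taken")
	}
}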
	I0522 18:32:19.080984  160939 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-737786" container
	I0522 18:32:19.081036  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:32:19.095483  160939 cli_runner.go:164] Run: docker volume create multinode-737786 --label name.minikube.sigs.k8s.io=multinode-737786 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:32:19.111371  160939 oci.go:103] Successfully created a docker volume multinode-737786
	I0522 18:32:19.111438  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --entrypoint /usr/bin/test -v multinode-737786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:32:19.598377  160939 oci.go:107] Successfully prepared a docker volume multinode-737786
	I0522 18:32:19.598412  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:19.598430  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:32:19.598501  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:32:23.741449  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.142877958s)
	I0522 18:32:23.741484  160939 kic.go:203] duration metric: took 4.14304939s to extract preloaded images to volume ...
	W0522 18:32:23.741633  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:32:23.741756  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:32:23.786059  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786 --name multinode-737786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786 --network multinode-737786 --ip 192.168.67.2 --volume multinode-737786:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:32:24.069142  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Running}}
	I0522 18:32:24.086344  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.103978  160939 cli_runner.go:164] Run: docker exec multinode-737786 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:32:24.141807  160939 oci.go:144] the created container "multinode-737786" has a running status.
	I0522 18:32:24.141842  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa...
	I0522 18:32:24.342469  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:32:24.342509  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:32:24.363722  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.383810  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:32:24.383841  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786 chown docker:docker /home/docker/.ssh/authorized_keys]
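A hedged sketch of this SSH-key provisioning step: generate an RSA key pair, write id_rsa/id_rsa.pub, then push the public key into the container's authorized_keys over docker exec (uses golang.org/x/crypto/ssh; container name taken from the log, error handling trimmed):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"
	"os/exec"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	priv := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	os.WriteFile("id_rsa", priv, 0o600)
	os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644)

	// Equivalent of the kic_runner steps: append the key, then fix ownership.
	authorized := string(ssh.MarshalAuthorizedKey(pub)) // ends with a newline
	exec.Command("docker", "exec", "multinode-737786", "sh", "-c",
		"mkdir -p /home/docker/.ssh && cat >> /home/docker/.ssh/authorized_keys <<'EOF'\n"+authorized+"EOF").Run()
	exec.Command("docker", "exec", "--privileged", "multinode-737786",
		"chown", "docker:docker", "/home/docker/.ssh/authorized_keys").Run()
}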
	I0522 18:32:24.455784  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.474782  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:32:24.474871  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.497547  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.497754  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.497767  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:32:24.698482  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.698509  160939 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:32:24.698565  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.715252  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.715478  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.715502  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:32:24.840636  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.840711  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.857900  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.858096  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.858117  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:32:24.967023  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:32:24.967068  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:32:24.967091  160939 ubuntu.go:177] setting up certificates
	I0522 18:32:24.967102  160939 provision.go:84] configureAuth start
	I0522 18:32:24.967154  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:24.983423  160939 provision.go:143] copyHostCerts
	I0522 18:32:24.983455  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983479  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:32:24.983485  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983549  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:32:24.983615  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983633  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:32:24.983640  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983665  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:32:24.983708  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983723  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:32:24.983730  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983749  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:32:24.983796  160939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
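The server-cert generation above (CA-signed, with the SAN list from the log) reduces to standard crypto/x509 calls; an illustrative sketch, not minikube's code, with error handling trimmed:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for minikubeCA from the log.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs logged above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-737786"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		DNSNames:     []string{"localhost", "minikube", "multinode-737786"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0o644)
}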
	I0522 18:32:25.113895  160939 provision.go:177] copyRemoteCerts
	I0522 18:32:25.113964  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:32:25.113999  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.130480  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:25.215072  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:32:25.215123  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:32:25.235444  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:32:25.235498  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:32:25.255313  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:32:25.255360  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:32:25.275241  160939 provision.go:87] duration metric: took 308.123688ms to configureAuth
	I0522 18:32:25.275280  160939 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:32:25.275447  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:25.275493  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.291597  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.291797  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.291813  160939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:32:25.403199  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:32:25.403222  160939 ubuntu.go:71] root file system type: overlay
	I0522 18:32:25.403368  160939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:32:25.403417  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.419508  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.419684  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.419742  160939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:32:25.540991  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:32:25.541068  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.556804  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.556997  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.557016  160939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:32:26.182116  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 18:32:25.538581939 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 18:32:26.182148  160939 machine.go:97] duration metric: took 1.707347407s to provisionDockerMachine
	I0522 18:32:26.182160  160939 client.go:171] duration metric: took 7.198404279s to LocalClient.Create
	I0522 18:32:26.182176  160939 start.go:167] duration metric: took 7.198463255s to libmachine.API.Create "multinode-737786"
	I0522 18:32:26.182182  160939 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:32:26.182195  160939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:32:26.182267  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:32:26.182301  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.198446  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.283412  160939 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:32:26.286206  160939 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:32:26.286222  160939 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:32:26.286230  160939 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:32:26.286238  160939 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:32:26.286245  160939 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:32:26.286252  160939 command_runner.go:130] > ID=ubuntu
	I0522 18:32:26.286258  160939 command_runner.go:130] > ID_LIKE=debian
	I0522 18:32:26.286280  160939 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:32:26.286291  160939 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:32:26.286302  160939 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:32:26.286317  160939 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:32:26.286328  160939 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:32:26.286376  160939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:32:26.286410  160939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:32:26.286428  160939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:32:26.286440  160939 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:32:26.286455  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:32:26.286505  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:32:26.286590  160939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:32:26.286602  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:32:26.286703  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:32:26.294122  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:26.314177  160939 start.go:296] duration metric: took 131.985031ms for postStartSetup
	I0522 18:32:26.314484  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.329734  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:26.329958  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:32:26.329996  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.344674  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.423242  160939 command_runner.go:130] > 27%
	I0522 18:32:26.423479  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:32:26.427170  160939 command_runner.go:130] > 215G
	I0522 18:32:26.427358  160939 start.go:128] duration metric: took 7.446253482s to createHost
	I0522 18:32:26.427380  160939 start.go:83] releasing machines lock for "multinode-737786", held for 7.446425308s
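The two capacity probes above (percent of /var used, free GiB) are plain df/awk pipelines run over SSH; wrapped in Go for completeness:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sh runs a pipeline through /bin/sh and returns trimmed stdout.
func sh(cmd string) string {
	out, _ := exec.Command("sh", "-c", cmd).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	used := sh(`df -h /var | awk 'NR==2{print $5}'`)  // e.g. 27%
	free := sh(`df -BG /var | awk 'NR==2{print $4}'`) // e.g. 215G
	fmt.Printf("/var used: %s, free: %s\n", used, free)
}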
	I0522 18:32:26.427450  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.442825  160939 ssh_runner.go:195] Run: cat /version.json
	I0522 18:32:26.442867  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.442937  160939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:32:26.443009  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.459148  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.459626  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.615027  160939 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:32:26.615123  160939 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:32:26.615168  160939 ssh_runner.go:195] Run: systemctl --version
	I0522 18:32:26.618922  160939 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:32:26.618954  160939 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:32:26.619096  160939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:32:26.622539  160939 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:32:26.622555  160939 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:32:26.622561  160939 command_runner.go:130] > Device: 37h/55d	Inode: 803930      Links: 1
	I0522 18:32:26.622567  160939 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:26.622576  160939 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622584  160939 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622592  160939 command_runner.go:130] > Change: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622604  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622753  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:32:26.643532  160939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:32:26.643591  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:32:26.666889  160939 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0522 18:32:26.666926  160939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 18:32:26.666940  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.666967  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.667076  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.679769  160939 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:32:26.680589  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:32:26.688804  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:32:26.696790  160939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:32:26.696843  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:32:26.705063  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.713131  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:32:26.721185  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.729165  160939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:32:26.736590  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:32:26.744755  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:32:26.752531  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:32:26.760599  160939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:32:26.767562  160939 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:32:26.767615  160939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:32:26.774559  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:26.839033  160939 ssh_runner.go:195] Run: sudo systemctl restart containerd
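The sed-based containerd edits above can equally be expressed as in-place regexp rewrites; a sketch covering the two most consequential ones (cgroup driver and sandbox image), with paths and values mirroring the log:

package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Force SystemdCgroup=false, matching the detected "cgroupfs" driver.
	s = regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`).
		ReplaceAllString(s, "${1}SystemdCgroup = false")
	// Pin the pause image, as the first sed above does.
	s = regexp.MustCompile(`(?m)^(\s*)sandbox_image = .*$`).
		ReplaceAllString(s, `${1}sandbox_image = "registry.k8s.io/pause:3.9"`)
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
	// A `systemctl daemon-reload && systemctl restart containerd` follows, as in the log.
}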
	I0522 18:32:26.926529  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.926582  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.926653  160939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:32:26.936733  160939 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:32:26.936821  160939 command_runner.go:130] > [Unit]
	I0522 18:32:26.936842  160939 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:32:26.936853  160939 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:32:26.936864  160939 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:32:26.936876  160939 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:32:26.936886  160939 command_runner.go:130] > Wants=network-online.target
	I0522 18:32:26.936894  160939 command_runner.go:130] > Requires=docker.socket
	I0522 18:32:26.936904  160939 command_runner.go:130] > StartLimitBurst=3
	I0522 18:32:26.936910  160939 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:32:26.936921  160939 command_runner.go:130] > [Service]
	I0522 18:32:26.936928  160939 command_runner.go:130] > Type=notify
	I0522 18:32:26.936937  160939 command_runner.go:130] > Restart=on-failure
	I0522 18:32:26.936949  160939 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:32:26.936965  160939 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:32:26.936979  160939 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:32:26.936992  160939 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:32:26.937014  160939 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:32:26.937027  160939 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:32:26.937042  160939 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:32:26.937058  160939 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:32:26.937072  160939 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:32:26.937081  160939 command_runner.go:130] > ExecStart=
	I0522 18:32:26.937105  160939 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:32:26.937116  160939 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:32:26.937132  160939 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:32:26.937143  160939 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:32:26.937151  160939 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:32:26.937158  160939 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:32:26.937167  160939 command_runner.go:130] > LimitCORE=infinity
	I0522 18:32:26.937177  160939 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:32:26.937188  160939 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:32:26.937197  160939 command_runner.go:130] > TasksMax=infinity
	I0522 18:32:26.937203  160939 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:32:26.937216  160939 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:32:26.937224  160939 command_runner.go:130] > Delegate=yes
	I0522 18:32:26.937234  160939 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:32:26.937243  160939 command_runner.go:130] > KillMode=process
	I0522 18:32:26.937253  160939 command_runner.go:130] > [Install]
	I0522 18:32:26.937263  160939 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:32:26.937834  160939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:32:26.937891  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:32:26.948358  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.963466  160939 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:32:26.963527  160939 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:32:26.966525  160939 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:32:26.966635  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:32:26.974160  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:32:26.991240  160939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:32:27.087184  160939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:32:27.183939  160939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:32:27.184074  160939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:32:27.199707  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.274364  160939 ssh_runner.go:195] Run: sudo systemctl restart docker
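The log records only that a 130-byte /etc/docker/daemon.json was written to select the "cgroupfs" driver; a plausible payload is sketched below (field values are an assumption, not taken from this report):

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed daemon.json contents; only the cgroup driver is confirmed by the log.
	daemon := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	out, _ := json.MarshalIndent(daemon, "", "  ")
	fmt.Println(string(out))
}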
	I0522 18:32:27.497339  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:32:27.508050  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.517912  160939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:32:27.594604  160939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:32:27.603789  160939 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0522 18:32:27.670370  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.738915  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:32:27.750303  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.759297  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.830818  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:32:27.886665  160939 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:32:27.886752  160939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:32:27.890680  160939 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:32:27.890703  160939 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:32:27.890711  160939 command_runner.go:130] > Device: 40h/64d	Inode: 258         Links: 1
	I0522 18:32:27.890720  160939 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:32:27.890729  160939 command_runner.go:130] > Access: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890736  160939 command_runner.go:130] > Modify: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890744  160939 command_runner.go:130] > Change: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890751  160939 command_runner.go:130] >  Birth: -
	I0522 18:32:27.890789  160939 start.go:562] Will wait 60s for crictl version
	I0522 18:32:27.890843  160939 ssh_runner.go:195] Run: which crictl
	I0522 18:32:27.893791  160939 command_runner.go:130] > /usr/bin/crictl
	I0522 18:32:27.893846  160939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:32:27.922140  160939 command_runner.go:130] > Version:  0.1.0
	I0522 18:32:27.922160  160939 command_runner.go:130] > RuntimeName:  docker
	I0522 18:32:27.922164  160939 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:32:27.922170  160939 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:32:27.924081  160939 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:32:27.924147  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.943721  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.943794  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.963666  160939 command_runner.go:130] > 26.1.2
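A sketch of the two readiness probes above: wait up to 60s for the cri-dockerd socket, then query crictl and the docker server version (socket path and timeout taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForSocket polls until path exists as a unix socket or the timeout lapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	crictl, _ := exec.Command("sudo", "crictl", "version").Output()
	docker, _ := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
	fmt.Printf("crictl:\n%s\ndocker server: %s", crictl, docker)
}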
	I0522 18:32:27.967758  160939 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:32:27.967841  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:27.982248  160939 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:32:27.985502  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:27.994876  160939 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:32:27.994996  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:27.995038  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.010537  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.010570  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.010579  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.010586  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.010591  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.010596  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.010603  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.010611  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.011521  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.011540  160939 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:32:28.011593  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.027292  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.027322  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.027331  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.027336  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.027341  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.027345  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.027350  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.027355  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.028262  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.028281  160939 cache_images.go:84] Images are preloaded, skipping loading
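The two identical docker images listings are minikube checking the preload twice: first to decide whether the preloaded image tarball needs extraction, then to confirm the cache before skipping the load. The check reduces to set containment, sketched here under the assumption that the expected list is supplied by the caller:

	package main
	
	import "fmt"
	
	// imagesPreloaded reports whether every required image tag appears in the
	// output of `docker images --format {{.Repository}}:{{.Tag}}`.
	func imagesPreloaded(have, want []string) bool {
		got := make(map[string]bool, len(have))
		for _, img := range have {
			got[img] = true
		}
		for _, img := range want {
			if !got[img] {
				return false
			}
		}
		return true
	}
	
	func main() {
		have := []string{
			"registry.k8s.io/kube-apiserver:v1.30.1",
			"registry.k8s.io/etcd:3.5.12-0",
		}
		fmt.Println(imagesPreloaded(have, []string{"registry.k8s.io/etcd:3.5.12-0"}))
	}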
	I0522 18:32:28.028301  160939 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:32:28.028415  160939 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
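The empty ExecStart= line in the generated kubelet unit is deliberate systemd syntax: ExecStart is a list-type directive that overrides append to, so it must be cleared before the real command line is declared or systemd would reject the service for having two start commands. A simplified rendering helper (illustrative, not minikube's actual template):

	package main
	
	import "fmt"
	
	// kubeletDropIn renders a minimal 10-kubeadm.conf-style override. The blank
	// ExecStart= resets any inherited value before the new command is declared.
	func kubeletDropIn(binary string, flags []string) string {
		s := "[Service]\nExecStart=\nExecStart=" + binary
		for _, f := range flags {
			s += " " + f
		}
		return s + "\n"
	}
	
	func main() {
		fmt.Print(kubeletDropIn("/var/lib/minikube/binaries/v1.30.1/kubelet",
			[]string{"--node-ip=192.168.67.2", "--hostname-override=multinode-737786"}))
	}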
	I0522 18:32:28.028462  160939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:32:28.069428  160939 command_runner.go:130] > cgroupfs
	I0522 18:32:28.070479  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:28.070498  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:28.070517  160939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:32:28.070539  160939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:32:28.070668  160939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
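The kubeadm.yaml written above is a single file carrying four YAML documents separated by --- markers: InitConfiguration (this node's advertise address and kubelet registration), ClusterConfiguration (control-plane endpoint, cert SANs, pod and service CIDRs), KubeletConfiguration, and KubeProxyConfiguration; kubeadm dispatches each document by its kind field. A quick sketch of splitting such a stream (plain string split for illustration; a real consumer would use a YAML decoder):

	package main
	
	import (
		"fmt"
		"strings"
	)
	
	// splitYAMLDocs breaks a multi-document YAML stream on "---" separators and
	// returns the non-empty documents, mirroring how the kubeadm.yaml above
	// carries Init/Cluster/Kubelet/KubeProxy configuration in one file.
	func splitYAMLDocs(stream string) []string {
		var docs []string
		for _, d := range strings.Split(stream, "\n---\n") {
			if strings.TrimSpace(d) != "" {
				docs = append(docs, d)
			}
		}
		return docs
	}
	
	func main() {
		stream := "kind: InitConfiguration\n---\nkind: ClusterConfiguration\n---\nkind: KubeletConfiguration\n---\nkind: KubeProxyConfiguration"
		fmt.Println(len(splitYAMLDocs(stream))) // prints 4
	}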
	I0522 18:32:28.070717  160939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:32:28.078629  160939 command_runner.go:130] > kubeadm
	I0522 18:32:28.078645  160939 command_runner.go:130] > kubectl
	I0522 18:32:28.078649  160939 command_runner.go:130] > kubelet
	I0522 18:32:28.078672  160939 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:32:28.078732  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:32:28.086243  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:32:28.101448  160939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:32:28.116571  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0522 18:32:28.131251  160939 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:32:28.134083  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:28.142915  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:28.220165  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:28.231892  160939 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:32:28.231919  160939 certs.go:194] generating shared ca certs ...
	I0522 18:32:28.231939  160939 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.232062  160939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:32:28.232110  160939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:32:28.232120  160939 certs.go:256] generating profile certs ...
	I0522 18:32:28.232166  160939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:32:28.232179  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt with IP's: []
	I0522 18:32:28.429639  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt ...
	I0522 18:32:28.429667  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt: {Name:mkf8a2953d60a961d7574d013acfe3a49fa0bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429820  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key ...
	I0522 18:32:28.429830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key: {Name:mk8a5d9e68b7e6e877768e7a2b460a40a5615658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429900  160939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:32:28.429915  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0522 18:32:28.507177  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 ...
	I0522 18:32:28.507207  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43: {Name:mk09ce970fc623afc85e3fab7e404680e391a586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507367  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 ...
	I0522 18:32:28.507382  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43: {Name:mkb137dcb8e57c549f50c85273becdd727997895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507489  160939 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	I0522 18:32:28.507557  160939 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key
	I0522 18:32:28.507612  160939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:32:28.507627  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt with IP's: []
	I0522 18:32:28.617440  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt ...
	I0522 18:32:28.617473  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt: {Name:mk54959ff23e2bad94a115faba59db15d7610b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617661  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key ...
	I0522 18:32:28.617679  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key: {Name:mkd647f7d425cda8f2c79b7f52b5e4d12a0c0d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617777  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:32:28.617797  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:32:28.617808  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:32:28.617823  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:32:28.617836  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:32:28.617848  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:32:28.617860  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:32:28.617873  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:32:28.617924  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:32:28.617957  160939 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:32:28.617967  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:32:28.617990  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:32:28.618019  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:32:28.618040  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:32:28.618075  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:28.618102  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.618116  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.618128  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.618629  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:32:28.639518  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:32:28.659910  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:32:28.679937  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:32:28.699821  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:32:28.719536  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:32:28.739636  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:32:28.759509  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:32:28.779547  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:32:28.799365  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:32:28.819247  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:32:28.839396  160939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:32:28.854046  160939 ssh_runner.go:195] Run: openssl version
	I0522 18:32:28.858540  160939 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:32:28.858690  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:32:28.866551  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869507  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869532  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869569  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.875214  160939 command_runner.go:130] > b5213941
	I0522 18:32:28.875413  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:32:28.883074  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:32:28.890531  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893535  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893557  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893596  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.899083  160939 command_runner.go:130] > 51391683
	I0522 18:32:28.899310  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:32:28.906972  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:32:28.914876  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917837  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917865  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917909  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.923606  160939 command_runner.go:130] > 3ec20f2e
	I0522 18:32:28.923823  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
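Each CA installed under /usr/share/ca-certificates is then wired into OpenSSL's hashed-directory lookup: openssl x509 -hash -noout prints the subject-name hash (b5213941, 51391683 and 3ec20f2e above), and a symlink named <hash>.0 in /etc/ssl/certs is what lets OpenSSL find the certificate during chain verification. The same step sketched in Go, assuming the openssl binary is on PATH:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkCertByHash replicates the c_rehash-style step from the log: compute the
	// certificate's subject hash and symlink /etc/ssl/certs/<hash>.0 to it.
	func linkCertByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // error ignored: the link may not exist yet
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println("link failed:", err)
		}
	}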
	I0522 18:32:28.931516  160939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:32:28.934218  160939 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934259  160939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934296  160939 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:28.934404  160939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:32:28.950504  160939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:32:28.958332  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0522 18:32:28.958356  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0522 18:32:28.958365  160939 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0522 18:32:28.958430  160939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 18:32:28.966017  160939 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 18:32:28.966056  160939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 18:32:28.973169  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0522 18:32:28.973191  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0522 18:32:28.973203  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0522 18:32:28.973217  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973245  160939 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973254  160939 kubeadm.go:156] found existing configuration files:
	
	I0522 18:32:28.973282  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 18:32:28.979661  160939 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980332  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980367  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 18:32:28.987227  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 18:32:28.994428  160939 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994468  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994505  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 18:32:29.001374  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.008562  160939 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008604  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008648  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.015901  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 18:32:29.023088  160939 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023130  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023170  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
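These grep probes drive the stale-config cleanup: exit status 0 would mean the kubeconfig already points at control-plane.minikube.internal:8443, status 1 would mean the file exists but targets a different endpoint, and status 2 (what the log shows) means the file is missing or unreadable; in the latter two cases the file is removed so kubeadm can regenerate it. A small sketch of that classification (helper name is illustrative):

	package main
	
	import (
		"errors"
		"fmt"
		"os/exec"
	)
	
	// classifyGrep maps grep's exit status onto the decisions the cleanup makes:
	// 0 = endpoint already present, 1 = file present without the endpoint,
	// 2 = file missing or unreadable. Anything else means grep itself failed.
	func classifyGrep(err error) string {
		if err == nil {
			return "keep: endpoint already configured"
		}
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			switch ee.ExitCode() {
			case 1:
				return "remove: endpoint not in file"
			case 2:
				return "remove: file missing"
			}
		}
		return "error: grep did not run"
	}
	
	func main() {
		err := exec.Command("grep", "https://control-plane.minikube.internal:8443",
			"/etc/kubernetes/admin.conf").Run()
		fmt.Println(classifyGrep(err))
	}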
	I0522 18:32:29.030242  160939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 18:32:29.069760  160939 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069799  160939 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069836  160939 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 18:32:29.069844  160939 command_runner.go:130] > [preflight] Running pre-flight checks
	I0522 18:32:29.113834  160939 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113865  160939 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113960  160939 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.113987  160939 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.114021  160939 kubeadm.go:309] OS: Linux
	I0522 18:32:29.114029  160939 command_runner.go:130] > OS: Linux
	I0522 18:32:29.114085  160939 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 18:32:29.114092  160939 command_runner.go:130] > CGROUPS_CPU: enabled
	I0522 18:32:29.114134  160939 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114140  160939 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114177  160939 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 18:32:29.114183  160939 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0522 18:32:29.114230  160939 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 18:32:29.114237  160939 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0522 18:32:29.114278  160939 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 18:32:29.114285  160939 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0522 18:32:29.114324  160939 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 18:32:29.114331  160939 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0522 18:32:29.114373  160939 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 18:32:29.114379  160939 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0522 18:32:29.114421  160939 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114428  160939 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114464  160939 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 18:32:29.114483  160939 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0522 18:32:29.173446  160939 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173485  160939 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173623  160939 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173639  160939 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173777  160939 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.173789  160939 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.376675  160939 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379640  160939 out.go:204]   - Generating certificates and keys ...
	I0522 18:32:29.376743  160939 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379742  160939 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0522 18:32:29.379760  160939 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 18:32:29.379853  160939 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.379864  160939 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.571675  160939 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.571705  160939 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.667370  160939 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.667408  160939 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.730638  160939 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:29.730650  160939 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:30.114166  160939 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.114190  160939 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.185007  160939 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185032  160939 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185157  160939 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.185169  160939 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376151  160939 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376188  160939 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376347  160939 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376364  160939 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.621621  160939 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.621651  160939 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.882886  160939 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.882922  160939 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.976851  160939 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 18:32:30.976877  160939 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0522 18:32:30.976927  160939 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:30.976932  160939 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:31.205083  160939 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.205126  160939 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.287749  160939 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.287812  160939 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.548360  160939 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.548390  160939 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.793952  160939 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.793983  160939 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.889475  160939 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.889508  160939 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.890099  160939 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.890122  160939 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.892764  160939 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895234  160939 out.go:204]   - Booting up control plane ...
	I0522 18:32:31.892832  160939 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895375  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895388  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895507  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895522  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895605  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.895619  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.903936  160939 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.903958  160939 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.904721  160939 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904737  160939 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904800  160939 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 18:32:31.904815  160939 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0522 18:32:31.989235  160939 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989268  160939 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989364  160939 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:31.989377  160939 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:32.490313  160939 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490352  160939 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490462  160939 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:32.490478  160939 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:36.991403  160939 kubeadm.go:309] [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:36.991445  160939 command_runner.go:130] > [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:37.002153  160939 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.002184  160939 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.012503  160939 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.012532  160939 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.028436  160939 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028465  160939 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028707  160939 kubeadm.go:309] [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.028725  160939 command_runner.go:130] > [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.035001  160939 kubeadm.go:309] [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.035012  160939 command_runner.go:130] > [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.036324  160939 out.go:204]   - Configuring RBAC rules ...
	I0522 18:32:37.036438  160939 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.036450  160939 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.039237  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.039252  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.044789  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.044808  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.047056  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.047074  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.049159  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.049174  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.051503  160939 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.051520  160939 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.397004  160939 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.397044  160939 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.813980  160939 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 18:32:37.814007  160939 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0522 18:32:38.397032  160939 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.397056  160939 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.398018  160939 kubeadm.go:309] 
	I0522 18:32:38.398101  160939 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398119  160939 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398137  160939 kubeadm.go:309] 
	I0522 18:32:38.398211  160939 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398218  160939 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398222  160939 kubeadm.go:309] 
	I0522 18:32:38.398246  160939 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 18:32:38.398255  160939 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0522 18:32:38.398337  160939 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398355  160939 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398434  160939 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398443  160939 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398453  160939 kubeadm.go:309] 
	I0522 18:32:38.398515  160939 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398522  160939 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398529  160939 kubeadm.go:309] 
	I0522 18:32:38.398609  160939 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398618  160939 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398622  160939 kubeadm.go:309] 
	I0522 18:32:38.398664  160939 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 18:32:38.398677  160939 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0522 18:32:38.398789  160939 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398800  160939 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398863  160939 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398869  160939 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398873  160939 kubeadm.go:309] 
	I0522 18:32:38.398944  160939 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.398950  160939 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.399022  160939 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 18:32:38.399032  160939 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0522 18:32:38.399037  160939 kubeadm.go:309] 
	I0522 18:32:38.399123  160939 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399130  160939 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399216  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399222  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399239  160939 kubeadm.go:309] 	--control-plane 
	I0522 18:32:38.399245  160939 command_runner.go:130] > 	--control-plane 
	I0522 18:32:38.399248  160939 kubeadm.go:309] 
	I0522 18:32:38.399370  160939 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399378  160939 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399382  160939 kubeadm.go:309] 
	I0522 18:32:38.399476  160939 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399489  160939 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399636  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.399649  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
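The --discovery-token-ca-cert-hash in both join commands pins the cluster CA for joining nodes: it is the SHA-256 of the DER-encoded Subject Public Key Info of the CA certificate, so a node bootstrapping over an untrusted network can verify it is talking to the intended cluster. It can be recomputed from ca.crt as follows:

	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	// caCertHash derives the kubeadm discovery hash from a PEM-encoded CA cert:
	// sha256 over the DER Subject Public Key Info of the certificate's key.
	func caCertHash(caPEM []byte) (string, error) {
		block, _ := pem.Decode(caPEM)
		if block == nil {
			return "", fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return "", err
		}
		spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
		if err != nil {
			return "", err
		}
		sum := sha256.Sum256(spki)
		return "sha256:" + hex.EncodeToString(sum[:]), nil
	}
	
	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(caCertHash(data))
	}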
	I0522 18:32:38.401263  160939 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401277  160939 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401363  160939 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401380  160939 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401398  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:38.401406  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:38.403405  160939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 18:32:38.404599  160939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 18:32:38.408100  160939 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0522 18:32:38.408121  160939 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0522 18:32:38.408128  160939 command_runner.go:130] > Device: 37h/55d	Inode: 808770      Links: 1
	I0522 18:32:38.408133  160939 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:38.408141  160939 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408145  160939 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408150  160939 command_runner.go:130] > Change: 2024-05-22 17:45:13.285811920 +0000
	I0522 18:32:38.408155  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:13.257809894 +0000
	I0522 18:32:38.408204  160939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 18:32:38.408217  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 18:32:38.424237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 18:32:38.586825  160939 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.590952  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.596051  160939 command_runner.go:130] > serviceaccount/kindnet created
	I0522 18:32:38.602929  160939 command_runner.go:130] > daemonset.apps/kindnet created
	I0522 18:32:38.606148  160939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 18:32:38.606224  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.606247  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-737786 minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=multinode-737786 minikube.k8s.io/primary=true
	I0522 18:32:38.613527  160939 command_runner.go:130] > -16
	I0522 18:32:38.613563  160939 ops.go:34] apiserver oom_adj: -16
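The -16 read back from /proc confirms that the kubelet applied OOM protection to the API server; lower oom_adj values make the kernel's OOM killer pick the process last. The probe is just pgrep plus a /proc read, sketched here:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"strconv"
		"strings"
	)
	
	// apiServerOOMAdj mirrors the log's check: find kube-apiserver's pid and read
	// its oom_adj; -16 means the kernel will strongly avoid OOM-killing it.
	func apiServerOOMAdj() (int, error) {
		out, err := exec.Command("pgrep", "kube-apiserver").Output()
		if err != nil {
			return 0, err
		}
		pid := strings.Fields(string(out))[0] // first match if several
		data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(string(data)))
	}
	
	func main() {
		fmt.Println(apiServerOOMAdj())
	}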
	I0522 18:32:38.671101  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0522 18:32:38.671199  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.679745  160939 command_runner.go:130] > node/multinode-737786 labeled
	I0522 18:32:38.773177  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.171792  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.232239  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.671894  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.732898  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.171368  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.228640  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.671860  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.732183  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.171401  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.231451  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.672085  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.732558  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.172181  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.230594  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.672237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.733746  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.171306  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.233896  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.671416  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.730755  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.171408  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.231441  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.672067  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.729906  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.171343  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.231696  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.671243  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.732606  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.172238  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.229695  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.671885  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.731711  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.171960  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.228503  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.671939  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.733171  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.171805  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.230525  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.672280  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.731666  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.171973  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.230294  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.671915  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.733184  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.171393  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.230515  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.672155  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.732157  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.171406  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.266742  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.671250  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.747943  160939 command_runner.go:130] > NAME      SECRETS   AGE
	I0522 18:32:51.747967  160939 command_runner.go:130] > default   0         0s
	I0522 18:32:51.747991  160939 kubeadm.go:1107] duration metric: took 13.141832952s to wait for elevateKubeSystemPrivileges
	W0522 18:32:51.748021  160939 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 18:32:51.748034  160939 kubeadm.go:393] duration metric: took 22.813740637s to StartCluster
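[Annotation] The ~13s run of `kubectl get sa default` retries above is minikube waiting for the cluster's first ServiceAccount to exist before granting kube-system privileges; the NotFound errors are expected until the controller-manager's serviceaccount controller catches up. A sketch of the same wait using client-go (interval and timeout are assumed values; this is a hypothetical standalone program, not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 500ms (up to 1m) until the "default" ServiceAccount exists,
        // tolerating the NotFound errors seen in the log above.
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond,
            time.Minute, true, func(ctx context.Context) (bool, error) {
                _, err := cs.CoreV1().ServiceAccounts("default").
                    Get(ctx, "default", metav1.GetOptions{})
                if apierrors.IsNotFound(err) {
                    return false, nil // keep retrying
                }
                return err == nil, err
            })
        fmt.Println("service account ready:", err == nil)
    }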
	I0522 18:32:51.748054  160939 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.748131  160939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.748830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.749052  160939 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:51.750591  160939 out.go:177] * Verifying Kubernetes components...
	I0522 18:32:51.749093  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 18:32:51.749107  160939 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:32:51.749382  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:51.752222  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:51.752296  160939 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:32:51.752312  160939 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:32:51.752326  160939 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	I0522 18:32:51.752339  160939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:32:51.752357  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.752681  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.752857  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.774832  160939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:51.775039  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.776160  160939 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:51.776175  160939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:32:51.776227  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.776423  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
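[Annotation] The rest.Config dump above (sanitizedTLSClientConfig is just the redacted form rest.Config logs for its public TLSClientConfig field) shows the client authenticating with the profile's client certificate and key against https://192.168.67.2:8443. A reduced sketch of building such a config by hand, with field values copied from the log and everything else left at its zero value:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{
            Host: "https://192.168.67.2:8443",
            TLSClientConfig: rest.TLSClientConfig{
                CertFile: "/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt",
                KeyFile:  "/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key",
                CAFile:   "/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt",
            },
        }
        cs, err := kubernetes.NewForConfig(cfg) // typed clientset over this config
        fmt.Println(cs != nil, err)
    }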
	I0522 18:32:51.776863  160939 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:32:51.776981  160939 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	I0522 18:32:51.777016  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.777336  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.795509  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.796953  160939 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:51.796975  160939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:32:51.797025  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.814477  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.870824  160939 command_runner.go:130] > apiVersion: v1
	I0522 18:32:51.870847  160939 command_runner.go:130] > data:
	I0522 18:32:51.870853  160939 command_runner.go:130] >   Corefile: |
	I0522 18:32:51.870859  160939 command_runner.go:130] >     .:53 {
	I0522 18:32:51.870863  160939 command_runner.go:130] >         errors
	I0522 18:32:51.870869  160939 command_runner.go:130] >         health {
	I0522 18:32:51.870875  160939 command_runner.go:130] >            lameduck 5s
	I0522 18:32:51.870881  160939 command_runner.go:130] >         }
	I0522 18:32:51.870894  160939 command_runner.go:130] >         ready
	I0522 18:32:51.870908  160939 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0522 18:32:51.870919  160939 command_runner.go:130] >            pods insecure
	I0522 18:32:51.870929  160939 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0522 18:32:51.870939  160939 command_runner.go:130] >            ttl 30
	I0522 18:32:51.870946  160939 command_runner.go:130] >         }
	I0522 18:32:51.870957  160939 command_runner.go:130] >         prometheus :9153
	I0522 18:32:51.870967  160939 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0522 18:32:51.870977  160939 command_runner.go:130] >            max_concurrent 1000
	I0522 18:32:51.870983  160939 command_runner.go:130] >         }
	I0522 18:32:51.870993  160939 command_runner.go:130] >         cache 30
	I0522 18:32:51.871002  160939 command_runner.go:130] >         loop
	I0522 18:32:51.871009  160939 command_runner.go:130] >         reload
	I0522 18:32:51.871022  160939 command_runner.go:130] >         loadbalance
	I0522 18:32:51.871031  160939 command_runner.go:130] >     }
	I0522 18:32:51.871038  160939 command_runner.go:130] > kind: ConfigMap
	I0522 18:32:51.871047  160939 command_runner.go:130] > metadata:
	I0522 18:32:51.871058  160939 command_runner.go:130] >   creationTimestamp: "2024-05-22T18:32:37Z"
	I0522 18:32:51.871067  160939 command_runner.go:130] >   name: coredns
	I0522 18:32:51.871075  160939 command_runner.go:130] >   namespace: kube-system
	I0522 18:32:51.871086  160939 command_runner.go:130] >   resourceVersion: "229"
	I0522 18:32:51.871097  160939 command_runner.go:130] >   uid: d6517ddd-1175-4a40-a10d-60d1d382d7ae
	I0522 18:32:51.892382  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:51.892495  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
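[Annotation] The pipeline above rewrites the CoreDNS ConfigMap dumped at 18:32:51.870: the first sed expression inserts a hosts block (mapping host.minikube.internal to the host gateway 192.168.67.1) before the forward plugin, and the second inserts log before errors. Applying those edits to the dumped Corefile should yield, approximately:

    .:53 {
        log
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }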
	I0522 18:32:51.950050  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.950378  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.950733  160939 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.950852  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:51.950863  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.950877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.950889  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.959546  160939 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0522 18:32:51.959576  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.959584  160939 round_trippers.go:580]     Audit-Id: 5ddc21bd-b1b2-4ea2-81cf-c014c9a04f15
	I0522 18:32:51.959590  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.959595  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.959598  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.959602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.959606  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.959736  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:51.960668  160939 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:32:51.960761  160939 node_ready.go:38] duration metric: took 9.99326ms for node "multinode-737786" to be "Ready" ...
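[Annotation] node_ready above fetches /api/v1/nodes/multinode-737786 once and immediately reports Ready:"True". The check itself reduces to scanning the node's status conditions; a minimal helper sketch (assumed, not minikube's actual code):

    package nodeutil

    import corev1 "k8s.io/api/core/v1"

    // isNodeReady reports whether the node's NodeReady condition is True --
    // the predicate behind the node_ready.go lines above (sketch only).
    func isNodeReady(n *corev1.Node) bool {
        for _, c := range n.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }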
	I0522 18:32:51.960805  160939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:32:51.960931  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:32:51.960963  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.960982  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.960996  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.964902  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:51.964929  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.964939  160939 round_trippers.go:580]     Audit-Id: 8b3d34ee-cdb3-49cd-991b-94f61024f9e2
	I0522 18:32:51.964945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.964952  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.964972  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.964977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.964987  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.965722  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"354"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59005 chars]
	I0522 18:32:51.970917  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	I0522 18:32:51.971068  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:51.971109  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.971130  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.971146  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.043914  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:52.045304  160939 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0522 18:32:52.045329  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.045339  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.045343  160939 round_trippers.go:580]     Audit-Id: bed69948-0150-43f6-8c9c-dfd39f8a81e4
	I0522 18:32:52.045349  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.045354  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.045361  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.045365  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.046685  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.047307  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.047329  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.047339  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.047344  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.049383  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:52.051476  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.051500  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.051510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.051516  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.051520  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.051524  160939 round_trippers.go:580]     Audit-Id: 2d50dfec-8764-4cd8-92b8-99f40ba4532d
	I0522 18:32:52.051530  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.051543  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.051659  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.471981  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.472002  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.472013  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.472019  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.547388  160939 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0522 18:32:52.547416  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.547425  160939 round_trippers.go:580]     Audit-Id: 3eb91eea-1138-4663-bd0b-d4f080c3a1ee
	I0522 18:32:52.547430  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.547435  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.547439  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.547457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.547463  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.547916  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.548699  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.548751  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.548782  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.548796  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.554135  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.554200  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.554224  160939 round_trippers.go:580]     Audit-Id: c62627b8-a513-4303-8697-a7fe1f12763e
	I0522 18:32:52.554239  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.554272  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.554291  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.554304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.554318  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.554527  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.556697  160939 command_runner.go:130] > configmap/coredns replaced
	I0522 18:32:52.556753  160939 start.go:946] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0522 18:32:52.557175  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:52.557491  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:52.557873  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.557907  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.557920  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.557932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558046  160939 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0522 18:32:52.558165  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:32:52.558237  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.558260  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558272  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.560256  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:52.560319  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.560338  160939 round_trippers.go:580]     Audit-Id: 12b0e11e-6a44-4304-a157-2b7055e2205e
	I0522 18:32:52.560351  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.560363  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.560396  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.560416  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.560431  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.560444  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.560488  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561030  160939 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561137  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.561162  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.561192  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.561209  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.561222  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.561529  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:52.561547  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.561556  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.561562  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.561567  160939 round_trippers.go:580]     Content-Length: 1273
	I0522 18:32:52.561573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.561577  160939 round_trippers.go:580]     Audit-Id: e2fb2ed9-f480-430a-b9b8-1cb5e5498c36
	I0522 18:32:52.561587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.561592  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.561795  160939 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0522 18:32:52.562115  160939 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.562161  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:32:52.562173  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.562180  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.562188  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.562193  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.566308  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.566355  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.566400  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566361  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566429  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566439  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566449  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566463  160939 round_trippers.go:580]     Content-Length: 1220
	I0522 18:32:52.566468  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566473  160939 round_trippers.go:580]     Audit-Id: 6b60d46d-17ef-45bb-880c-06c439fe9bab
	I0522 18:32:52.566411  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566491  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566498  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566501  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.566505  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566505  160939 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.566509  160939 round_trippers.go:580]     Audit-Id: 2b01bd0d-fb2f-4a1e-8831-7dc2e68860f5
	I0522 18:32:52.566521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566538  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"360","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.972030  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.972055  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.972069  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.972073  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.973864  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.973887  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.973900  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.973905  160939 round_trippers.go:580]     Audit-Id: 487db757-1a6c-442b-b5d4-799652d478f6
	I0522 18:32:52.973912  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.973918  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.973922  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.973927  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.974296  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:52.974890  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.974910  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.974922  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.974927  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.976545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.976564  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.976574  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.976579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.976584  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.976589  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.976594  160939 round_trippers.go:580]     Audit-Id: 785dc732-84fe-4320-964c-c2a36a76c8f6
	I0522 18:32:52.976600  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.976934  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.058578  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:53.058609  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.058620  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.058627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.061245  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.061289  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.061299  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:53.061340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.061372  160939 round_trippers.go:580]     Audit-Id: 77d818dd-5f3a-495e-b1ef-ad1a288275fa
	I0522 18:32:53.061388  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.061402  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.061415  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.061432  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.061472  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"370","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:53.061571  160939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-737786" context rescaled to 1 replicas
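[Annotation] The GET/PUT pair on .../deployments/coredns/scale above is minikube trimming CoreDNS from 2 replicas to 1 through the scale subresource. A sketch of the same rescale via client-go (a hypothetical standalone program; minikube drives this through kapi.go):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()
        // GET the scale subresource, set replicas to 1, and PUT it back --
        // the same request pair the round_trippers lines above record.
        scale, err := cs.AppsV1().Deployments("kube-system").
            GetScale(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        scale.Spec.Replicas = 1
        _, err = cs.AppsV1().Deployments("kube-system").
            UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
        fmt.Println("rescaled:", err == nil)
    }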
	I0522 18:32:53.076516  160939 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0522 18:32:53.076577  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0522 18:32:53.076599  160939 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076613  160939 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076633  160939 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0522 18:32:53.076657  160939 command_runner.go:130] > pod/storage-provisioner created
	I0522 18:32:53.076679  160939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02727208s)
	I0522 18:32:53.079116  160939 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:32:53.080504  160939 addons.go:505] duration metric: took 1.3313922s for enable addons: enabled=[default-storageclass storage-provisioner]
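[Annotation] For reference, the default StorageClass the addon step applied can be reconstructed from the kubectl.kubernetes.io/last-applied-configuration annotation in the responses above (a reconstruction, not the verbatim contents of /etc/kubernetes/addons/storageclass.yaml):

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
    provisioner: k8s.io/minikube-hostpath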
	I0522 18:32:53.471419  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.471453  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.471462  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.471488  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.473769  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.473791  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.473800  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.473806  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.473811  160939 round_trippers.go:580]     Audit-Id: 19f0699f-65e4-4321-a5c4-f6dcf712595d
	I0522 18:32:53.473821  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.473827  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.473830  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.474009  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.474506  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.474523  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.474532  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.474538  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.476545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.476568  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.476579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.476584  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.476591  160939 round_trippers.go:580]     Audit-Id: 723b363a-893a-4a61-92a4-6c8128f0cdae
	I0522 18:32:53.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.476602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.476735  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.971555  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.971574  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.971587  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.971591  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.973627  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.973649  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.973659  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.973664  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.973670  160939 round_trippers.go:580]     Audit-Id: e1a5610a-326e-418b-be80-a1b218bad573
	I0522 18:32:53.973679  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.973686  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.973691  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.973900  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.974364  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.974377  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.974386  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.974395  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.976104  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.976125  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.976134  160939 round_trippers.go:580]     Audit-Id: 1d117d40-7bef-4873-8469-b7cbb9e6e3e0
	I0522 18:32:53.976139  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.976143  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.976148  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.976158  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.976278  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.976641  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
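	(Editor's note) The pod_ready.go:102 lines above and below are minikube's readiness wait loop: roughly every 500ms it GETs the coredns pod, inspects status.conditions, then re-checks the node. As a reference for what that loop is doing, here is a minimal, hypothetical sketch of the same pattern with client-go (assuming k8s.io/apimachinery v0.27+ for wait.PollUntilContextTimeout). It is not minikube's actual pod_ready.go; the namespace and pod name are simply copied from the log.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForPodReady polls the pod until its Ready condition is True,
	// mirroring the ~500ms GET cadence visible in the timestamps above.
	func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err // abort on API errors; a hardened loop might retry instead
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-fhhmr"); err != nil {
			fmt.Println("pod never became Ready:", err)
		}
	}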
	I0522 18:32:54.471526  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.471550  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.471561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.471566  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.473892  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.473909  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.473916  160939 round_trippers.go:580]     Audit-Id: 38fa8439-426c-4d8e-8939-768fdd726b5d
	I0522 18:32:54.473920  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.473923  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.473929  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.473935  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.473939  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.474175  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.474657  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.474672  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.474679  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.474682  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.476422  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.476440  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.476449  160939 round_trippers.go:580]     Audit-Id: a464492a-887c-4ec3-9a36-841c6416e733
	I0522 18:32:54.476454  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.476458  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.476461  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.476465  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.476470  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.476646  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:54.971300  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.971328  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.971338  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.971345  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.973536  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.973554  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.973560  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.973564  160939 round_trippers.go:580]     Audit-Id: 233e0e2b-7f8e-4aa8-8c2e-b30dfaf9e4ee
	I0522 18:32:54.973569  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.973575  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.973580  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.973588  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.973824  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.974258  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.974270  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.974277  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.974281  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.976126  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.976141  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.976157  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.976161  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.976166  160939 round_trippers.go:580]     Audit-Id: 72f4a310-bf67-444b-9e24-1577b45c6c56
	I0522 18:32:54.976171  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.976176  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.976347  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.471862  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.471892  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.471903  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.471908  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.474083  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:55.474099  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.474105  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.474108  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.474111  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.474114  160939 round_trippers.go:580]     Audit-Id: 8719e64b-1bf6-4245-a412-eed38a58d1ce
	I0522 18:32:55.474117  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.474121  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.474290  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.474797  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.474823  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.474832  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.474840  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.476324  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.476342  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.476349  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.476355  160939 round_trippers.go:580]     Audit-Id: db213f13-4ec8-4ca3-8987-3f1626a1ad2d
	I0522 18:32:55.476361  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.476365  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.476368  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.476372  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.476512  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.972155  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.972178  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.972186  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.972189  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.973945  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.973967  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.973975  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.973981  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.973987  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.973990  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.973994  160939 round_trippers.go:580]     Audit-Id: a2f51de9-bbaf-49c3-b52e-cd37fc92f529
	I0522 18:32:55.973999  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.974153  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.974595  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.974611  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.974621  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.974627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.976270  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.976293  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.976301  160939 round_trippers.go:580]     Audit-Id: 93227216-8ffe-41b3-8a0d-0b4e86a54912
	I0522 18:32:55.976306  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.976310  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.976315  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.976319  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.976325  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.976427  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.976688  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:56.472139  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.472158  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.472167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.472170  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.474238  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.474260  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.474268  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.474274  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.474279  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.474283  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.474287  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.474292  160939 round_trippers.go:580]     Audit-Id: f67f7ae7-b10d-49f2-94a9-005c4a460c94
	I0522 18:32:56.474484  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.474925  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.474940  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.474946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.474951  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.476537  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.476552  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.476558  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.476563  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.476567  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.476570  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.476573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.476576  160939 round_trippers.go:580]     Audit-Id: 518e1062-0e5b-47ad-b60f-0ff66e25a622
	I0522 18:32:56.476712  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:56.971350  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.971373  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.971381  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.971384  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.973476  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.973497  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.973506  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.973511  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.973517  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.973523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.973527  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.973531  160939 round_trippers.go:580]     Audit-Id: eedbefe3-18e8-407d-9ede-0033266cdf11
	I0522 18:32:56.973633  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.974094  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.974111  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.974118  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.974123  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.975718  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.975738  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.975747  160939 round_trippers.go:580]     Audit-Id: 74afa443-a147-43c7-8759-9886afead09a
	I0522 18:32:56.975753  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.975758  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.975764  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.975768  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.975771  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.975928  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.471499  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.471522  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.471528  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.471532  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.473644  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.473662  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.473668  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.473671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.473674  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.473677  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.473680  160939 round_trippers.go:580]     Audit-Id: 2eec1341-a4a0-4edc-9eab-dd0cee12d4eb
	I0522 18:32:57.473682  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.473870  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.474329  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.474343  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.474350  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.474353  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.475871  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.475886  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.475896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.475901  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.475906  160939 round_trippers.go:580]     Audit-Id: 7e8e4b95-aa91-463a-8f1e-a7944e5daa49
	I0522 18:32:57.475911  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.475916  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.475920  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.476058  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.971752  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.971774  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.971786  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.971790  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.974020  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.974037  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.974043  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.974047  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.974051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.974054  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.974057  160939 round_trippers.go:580]     Audit-Id: 9042de65-ddca-4653-8deb-6e07b20ad9d2
	I0522 18:32:57.974061  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.974263  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.974686  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.974698  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.974705  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.974709  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.976426  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.976445  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.976453  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.976459  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.976464  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.976467  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.976472  160939 round_trippers.go:580]     Audit-Id: 9526988d-2210-4a9c-a210-f69ada2f111e
	I0522 18:32:57.976478  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.976615  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.976919  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
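	(Editor's note) The round_trippers.go:463/469/574/580 lines that dominate this trace are client-go's verbose HTTP tracing, enabled here by -v=7 with --alsologtostderr. Below is a rough, hypothetical equivalent of that logging shape, written as a plain net/http RoundTripper wrapper; it is an illustration, not client-go's implementation, and the placeholder URL stands in for the API server (reaching the real https://192.168.67.2:8443 would additionally require the cluster's TLS credentials).

	package main

	import (
		"log"
		"net/http"
		"time"
	)

	// debugTransport logs each request and response in roughly the shape of
	// the round_trippers output above.
	type debugTransport struct{ next http.RoundTripper }

	func (d debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("%s %s", req.Method, req.URL)
		log.Print("Request Headers:")
		for k, vals := range req.Header {
			for _, v := range vals {
				log.Printf("    %s: %s", k, v)
			}
		}
		start := time.Now()
		resp, err := d.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		log.Print("Response Headers:")
		for k, vals := range resp.Header {
			for _, v := range vals {
				log.Printf("    %s: %s", k, v)
			}
		}
		return resp, nil
	}

	func main() {
		client := &http.Client{Transport: debugTransport{next: http.DefaultTransport}}
		resp, err := client.Get("https://example.com/") // placeholder for the API server URL
		if err != nil {
			log.Fatal(err)
		}
		resp.Body.Close()
	}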
	I0522 18:32:58.471854  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.471880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.471893  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.471899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.474173  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.474197  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.474206  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.474211  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.474216  160939 round_trippers.go:580]     Audit-Id: 0827c408-752f-4496-b2bf-06881300dabc
	I0522 18:32:58.474220  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.474224  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.474229  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.474408  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.474983  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.474998  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.475008  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.475014  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.476910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.476934  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.476952  160939 round_trippers.go:580]     Audit-Id: 338928cb-0e5e-4004-be77-29760ea7f6ae
	I0522 18:32:58.476958  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.476962  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.476966  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.476971  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.476986  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.477133  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:58.972097  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.972125  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.972137  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.972141  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.974651  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.974676  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.974683  160939 round_trippers.go:580]     Audit-Id: 3b3e33fc-c0a8-4a82-9e28-68c6c5eaf90e
	I0522 18:32:58.974688  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.974692  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.974695  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.974698  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.974707  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.974973  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.975580  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.975600  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.975610  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.975615  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.977624  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.977644  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.977654  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.977661  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.977666  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.977671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.977676  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.977680  160939 round_trippers.go:580]     Audit-Id: aa509792-9021-4f49-a36b-6862ae864dbf
	I0522 18:32:58.977836  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.471442  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.471471  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.471481  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.471486  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.473954  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.473974  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.473983  160939 round_trippers.go:580]     Audit-Id: 04e773e3-ead6-4608-b93f-200b1f7771a2
	I0522 18:32:59.473989  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.473992  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.473997  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.474001  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.474005  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.474205  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.474819  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.474880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.474905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.474923  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.476903  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.476923  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.476932  160939 round_trippers.go:580]     Audit-Id: 57919320-6611-4945-a59e-eab9e9d1f7e3
	I0522 18:32:59.476937  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.476943  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.476949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.476953  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.476958  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.477092  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.971835  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.971912  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.971932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.971946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.974565  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.974586  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.974602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.974606  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.974610  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.974614  160939 round_trippers.go:580]     Audit-Id: 4509f4e5-e206-4cb4-9616-c5dedd8269bf
	I0522 18:32:59.974619  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.974624  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.974794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.975386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.975404  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.975413  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.975419  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.977401  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.977425  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.977434  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.977440  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.977445  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.977449  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.977453  160939 round_trippers.go:580]     Audit-Id: ba22dbea-6d68-4ec4-bcad-c24172ba5062
	I0522 18:32:59.977458  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.977594  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.977937  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
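	(The block above is one full iteration of minikube's readiness poll: roughly every 500 ms it GETs the coredns Pod and its Node, then re-checks the Pod's Ready condition via pod_ready.go. The sketch below shows that pattern with client-go; it is illustrative only — the helper name pollPodReady and the default kubeconfig path are assumptions, not minikube's actual implementation.)

	// Illustrative sketch (not minikube's code): poll a Pod until its
	// Ready condition is True, mirroring the repeated GETs in this log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// pollPodReady is a hypothetical helper: it re-fetches the Pod on each
	// tick and scans status.conditions, as the log's GET loop does.
	func pollPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, err
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := pollPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-fhhmr"); err != nil {
			fmt.Println("pod never became ready:", err)
		}
	}

	(Passing immediate=true makes the first check run before the first sleep, which matches the back-to-back Pod/Node GETs at the start of each half-second window above.)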
	I0522 18:33:00.471222  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.471241  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.471249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.471252  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.473593  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.473618  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.473629  160939 round_trippers.go:580]     Audit-Id: c4fb389b-3f7d-490e-a802-3bf985dfd423
	I0522 18:33:00.473636  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.473641  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.473645  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.473651  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.473656  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.473892  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.474545  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.474565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.474576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.474581  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.476561  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.476581  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.476590  160939 round_trippers.go:580]     Audit-Id: 67254c57-0400-4b43-af9d-f4913af7b105
	I0522 18:33:00.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.476603  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.476608  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.476611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.476748  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:00.971233  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.971261  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.971299  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.971306  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.973731  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.973750  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.973758  160939 round_trippers.go:580]     Audit-Id: 2f76e9b4-7689-4d89-b284-e9126bd9bad5
	I0522 18:33:00.973762  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.973765  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.973771  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.973774  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.973784  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.974017  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.974608  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.974625  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.974634  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.974639  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.976439  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.976457  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.976465  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.976470  160939 round_trippers.go:580]     Audit-Id: f4fe94f7-5d5c-4b51-a0c7-f46b19a6f0d4
	I0522 18:33:00.976477  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.976485  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.976494  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.976502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.976610  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.471893  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.471931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.471942  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.471949  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.474657  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.474680  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.474688  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.474696  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.474702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.474725  160939 round_trippers.go:580]     Audit-Id: f26f6817-f4b1-4acb-bdf5-088215c31307
	I0522 18:33:01.474736  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.474740  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.474974  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.475618  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.475639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.475649  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.475655  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.477465  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.477487  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.477497  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.477505  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.477510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.477514  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.477517  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.477524  160939 round_trippers.go:580]     Audit-Id: 1977529f-1acd-423c-9682-42cf6dd4398d
	I0522 18:33:01.477708  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.971204  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.971371  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.971388  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.971393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974041  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.974091  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.974104  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.974111  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.974116  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.974121  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.974127  160939 round_trippers.go:580]     Audit-Id: 292c70c4-b00e-4836-b96a-6c8a747f9bd9
	I0522 18:33:01.974131  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.974293  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.974866  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.974888  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.974899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.976825  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.976848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.976856  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.976862  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.976868  160939 round_trippers.go:580]     Audit-Id: 388c0271-dee4-4384-b77b-c690f1d36c5a
	I0522 18:33:01.976873  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.976880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.976883  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.977037  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.471454  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.471549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.471565  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.471574  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.474157  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.474178  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.474186  160939 round_trippers.go:580]     Audit-Id: 82bb2437-1ea8-4e8d-9e5f-70376d7ee9ee
	I0522 18:33:02.474192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.474196  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.474200  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.474205  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.474208  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.474392  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.475060  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.475077  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.475087  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.475092  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.477070  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.477099  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.477109  160939 round_trippers.go:580]     Audit-Id: 67eab720-8fd6-4965-a754-5010c88a7253
	I0522 18:33:02.477116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.477120  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.477124  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.477127  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.477131  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.477280  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.477649  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
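	(Note what every Pod body in this loop already shows: metadata carries deletionTimestamp 2024-05-22T18:33:22Z with deletionGracePeriodSeconds 30, so this coredns replica is terminating and can never transition back to Ready — the poll is guaranteed to exhaust its timeout. A sketch of a guard that fails fast on terminating Pods follows; isTerminating and readyOrFail are assumed names for illustration, not minikube or client-go APIs.)

	// Sketch (assumption, not minikube's logic): abort the readiness wait
	// once the Pod is marked for deletion, since a terminating Pod will
	// never report Ready again.
	package main

	import (
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	)

	// isTerminating reports whether graceful deletion has begun; the API
	// server sets metadata.deletionTimestamp exactly once, at delete time.
	func isTerminating(pod *corev1.Pod) bool {
		return pod.DeletionTimestamp != nil
	}

	func readyOrFail(pod *corev1.Pod) (bool, error) {
		if isTerminating(pod) {
			return false, fmt.Errorf("pod %s/%s is terminating", pod.Namespace, pod.Name)
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
		return false, nil
	}

	func main() {
		now := metav1.Now()
		pod := &corev1.Pod{}
		pod.Namespace, pod.Name = "kube-system", "coredns-7db6d8ff4d-fhhmr"
		pod.DeletionTimestamp = &now
		_, err := readyOrFail(pod)
		fmt.Println(err) // pod kube-system/coredns-7db6d8ff4d-fhhmr is terminating
	}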
	I0522 18:33:02.971540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.971565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.971576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.971582  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.974293  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.974315  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.974325  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.974330  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.974335  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.974340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.974345  160939 round_trippers.go:580]     Audit-Id: ad75c6ab-9962-47cf-be26-f410ec61bd12
	I0522 18:33:02.974350  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.974587  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.975218  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.975239  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.975249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.975258  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.977182  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.977245  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.977260  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.977266  160939 round_trippers.go:580]     Audit-Id: c0467f5a-9a3a-40e8-b473-9c175fd6891e
	I0522 18:33:02.977271  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.977277  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.977284  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.977288  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.977392  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.472108  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.472133  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.472143  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.472149  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.474741  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.474768  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.474778  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.474782  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.474787  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.474792  160939 round_trippers.go:580]     Audit-Id: 1b9bea48-179f-40ca-a879-0e436eb40d14
	I0522 18:33:03.474797  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.474801  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.474970  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.475572  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.475591  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.475601  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.475607  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.477470  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.477489  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.477497  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.477502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.477506  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.477511  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.477515  160939 round_trippers.go:580]     Audit-Id: b00b1393-d773-4e79-83a7-fbadc0d83dce
	I0522 18:33:03.477521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.477650  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.971411  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.971440  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.971450  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.971455  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.974132  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.974155  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.974164  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.974171  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.974176  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.974180  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.974185  160939 round_trippers.go:580]     Audit-Id: 2b46951a-0d87-464c-b928-e0491b518b0e
	I0522 18:33:03.974192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.974344  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.974929  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.974949  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.974959  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.974965  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.976727  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.976759  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.976769  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.976775  160939 round_trippers.go:580]     Audit-Id: efda080a-3af4-4b70-aa46-baefc2b1a086
	I0522 18:33:03.976779  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.976784  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.976788  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.976792  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.977006  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.471440  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.471466  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.471475  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.471478  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.473781  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.473798  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.473806  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.473812  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.473823  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.473828  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.473832  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.473837  160939 round_trippers.go:580]     Audit-Id: 584fe422-d82d-4c7e-81d2-665d8be8873b
	I0522 18:33:04.474014  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.474484  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.474542  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.474564  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.474581  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.476818  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.476848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.476856  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.476862  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.476866  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.476872  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.476877  160939 round_trippers.go:580]     Audit-Id: 577875ba-d973-41fb-8b48-0973202f1354
	I0522 18:33:04.476885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.477034  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.971729  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.971751  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.971759  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.971763  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.974273  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.974295  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.974304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.974311  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.974318  160939 round_trippers.go:580]     Audit-Id: e77cbda3-9098-456e-962d-06d9e7e98aee
	I0522 18:33:04.974323  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.974336  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.974341  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.974475  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.975121  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.975157  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.975167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.975172  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.977047  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:04.977076  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.977086  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.977094  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.977102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.977110  160939 round_trippers.go:580]     Audit-Id: 15591115-c0cb-473f-90d4-6c56cf6353d7
	I0522 18:33:04.977116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.977124  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.977257  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.977558  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
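	(For context on where these entries originate: the round_trippers.go and request.go lines are client-go's built-in HTTP tracing, emitted only at high klog verbosity, which minikube turns on via --alsologtostderr and its -v flag. A sketch of enabling the same tracing in a standalone program is below; the exact verbosity thresholds vary by client-go version, and this is an assumption about wiring, not minikube's startup code.)

	// Sketch: enable client-go's request/response tracing via klog.
	// Higher -v prints progressively more detail (URLs, then headers,
	// then response bodies), producing lines like those in this log.
	package main

	import (
		"flag"

		"k8s.io/klog/v2"
	)

	func main() {
		fs := flag.NewFlagSet("klog", flag.ExitOnError)
		klog.InitFlags(fs)
		_ = fs.Set("v", "8") // assumed level; thresholds differ across client-go versions
		// ... build a client-go clientset and issue requests as usual;
		// each request and response is then traced like the entries above.
	}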
	I0522 18:33:05.471962  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.471987  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.471997  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.472003  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.474481  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.474506  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.474516  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.474523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.474527  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.474532  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.474536  160939 round_trippers.go:580]     Audit-Id: fdb343ad-37ed-4d5e-8481-409ca7bff1bb
	I0522 18:33:05.474542  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.474675  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.475316  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.475335  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.475345  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.475349  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.477162  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:05.477192  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.477208  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.477219  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.477224  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.477230  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.477237  160939 round_trippers.go:580]     Audit-Id: 5a4a1adb-a9e7-45d6-89b9-6f8cbdc8e14f
	I0522 18:33:05.477241  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.477365  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:05.971575  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.971603  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.971614  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.971620  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.973961  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.973988  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.973998  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.974005  160939 round_trippers.go:580]     Audit-Id: 6cf57dbb-f61f-4a34-ba71-0fa1a7be6c2f
	I0522 18:33:05.974009  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.974015  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.974020  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.974024  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.974227  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.974844  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.974866  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.974877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.974885  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.976914  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.976937  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.976948  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.976955  160939 round_trippers.go:580]     Audit-Id: f5c6902b-e141-4739-b75c-abe5d7d10bcc
	I0522 18:33:05.976962  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.976969  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.976977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.976982  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.977139  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.471359  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:06.471382  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.471390  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.471393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.473976  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.473998  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.474008  160939 round_trippers.go:580]     Audit-Id: 678a5898-c668-42b8-9f9d-cd08c0af9f0a
	I0522 18:33:06.474014  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.474021  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.474026  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.474032  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.474036  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.474212  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"419","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6465 chars]
	I0522 18:33:06.474787  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.474806  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.474816  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.474824  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.476696  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.476720  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.476727  160939 round_trippers.go:580]     Audit-Id: 08522360-196f-4610-a526-8fbc3b876994
	I0522 18:33:06.476732  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.476736  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.476739  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.476742  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.476754  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.476918  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.477418  160939 pod_ready.go:97] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2
}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0522 18:33:06.477449  160939 pod_ready.go:81] duration metric: took 14.506466075s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	E0522 18:33:06.477464  160939 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
7.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
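What the two status dumps above record: coredns-7db6d8ff4d-fhhmr already carries a deletionTimestamp (its ReplicaSet is scaling from two replicas to one), its container exited 0, and the pod settled in phase Succeeded. A Succeeded pod can never turn Ready, so the wait logs the full status and skips to the surviving replica. A minimal client-go sketch of that phase-then-condition check, assuming a configured clientset; podReady is an illustrative name, not minikube's actual helper:

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // podReady mirrors the decision logged above: phase Succeeded is an
    // error ("skipping!"); otherwise the PodReady condition decides.
    func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        if pod.Status.Phase == corev1.PodSucceeded {
            return false, fmt.Errorf("pod %q has status phase %q (skipping!)", name, pod.Status.Phase)
        }
        for _, cond := range pod.Status.Conditions {
            if cond.Type == corev1.PodReady {
                return cond.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }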
	I0522 18:33:06.477476  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:06.477540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.477549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.477558  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.477569  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.479562  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.479577  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.479583  160939 round_trippers.go:580]     Audit-Id: 9a30cf33-1204-4670-a99f-86946c97d423
	I0522 18:33:06.479587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.479591  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.479597  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.479605  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.479611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.479794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.480253  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.480269  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.480275  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.480279  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.481839  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.481857  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.481867  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.481872  160939 round_trippers.go:580]     Audit-Id: fa40a49d-204f-481d-8912-a34512c1ae3b
	I0522 18:33:06.481876  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.481880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.481884  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.481888  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.481980  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.978658  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.978680  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.978691  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.978699  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.980836  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.980853  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.980860  160939 round_trippers.go:580]     Audit-Id: afbb292e-0ad0-4084-869c-e9ab1e1013e2
	I0522 18:33:06.980864  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.980867  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.980869  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.980871  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.980874  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.981047  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.981449  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.981462  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.981468  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.981471  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.982978  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.983001  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.983007  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.983010  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.983014  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.983018  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.983021  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.983024  160939 round_trippers.go:580]     Audit-Id: 5f3372bc-5c9a-49ce-8e2e-d96da0513d85
	I0522 18:33:06.983146  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.478352  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:07.478377  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.478384  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.478388  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.480498  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.480523  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.480531  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.480535  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.480540  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.480543  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.480546  160939 round_trippers.go:580]     Audit-Id: eb5f2654-4971-4578-bff8-10e4102baa23
	I0522 18:33:07.480550  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.480747  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:33:07.481177  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.481191  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.481197  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.481201  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.482856  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.482869  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.482876  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.482880  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.482882  160939 round_trippers.go:580]     Audit-Id: 8e36f69f-54f0-4e9d-a61f-f28960dbb847
	I0522 18:33:07.482885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.482891  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.482896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.483013  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.483304  160939 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.483324  160939 pod_ready.go:81] duration metric: took 1.005836965s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483334  160939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
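Each "waiting up to 6m0s" entry opens a polling loop: the timestamps above show the pod (and then its node) re-fetched roughly every 500ms until the condition holds or the deadline passes. A sketch of that loop with apimachinery's wait helpers, reusing the podReady sketch above (interval and pod name are illustrative; needs k8s.io/apimachinery/pkg/util/wait, time, log):

    // Re-check roughly every 500ms, for at most 6 minutes; a non-nil error
    // from the condition (e.g. the Succeeded case) aborts the poll early.
    err := wait.PollUntilContextTimeout(context.Background(),
        500*time.Millisecond, 6*time.Minute, true,
        func(ctx context.Context) (bool, error) {
            return podReady(ctx, clientset, "kube-system", "etcd-multinode-737786")
        })
    if err != nil {
        log.Fatalf("pod never became Ready: %v", err)
    }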
	I0522 18:33:07.483386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:33:07.483393  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.483399  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.483403  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.485055  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.485074  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.485080  160939 round_trippers.go:580]     Audit-Id: 36a9d3b1-5c0c-41cd-92e6-65aaf83162ed
	I0522 18:33:07.485084  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.485089  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.485093  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.485098  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.485102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.485211  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:33:07.485525  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.485537  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.485544  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.485547  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.486957  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.486977  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.486984  160939 round_trippers.go:580]     Audit-Id: 4d183f34-de9b-40df-89b0-747f4b8d080a
	I0522 18:33:07.486991  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.486997  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.487008  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.487015  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.487019  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.487106  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.487417  160939 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.487433  160939 pod_ready.go:81] duration metric: took 4.091969ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487445  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487498  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:33:07.487505  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.487511  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.487514  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.489030  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.489044  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.489060  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.489064  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.489068  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.489072  160939 round_trippers.go:580]     Audit-Id: 816d35e6-d77c-435e-912a-947f9c9ca4d7
	I0522 18:33:07.489075  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.489078  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.489182  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:33:07.489546  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.489558  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.489564  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.489568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.490910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.490924  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.490930  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.490934  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.490937  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.490942  160939 round_trippers.go:580]     Audit-Id: 15a2ac49-01ac-4660-8380-560b4572c707
	I0522 18:33:07.490945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.490949  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.491063  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.491412  160939 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.491430  160939 pod_ready.go:81] duration metric: took 3.978447ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491441  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491501  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:33:07.491510  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.491520  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.491525  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.492901  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.492917  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.492936  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.492944  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.492949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.492953  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.492958  160939 round_trippers.go:580]     Audit-Id: 599fa209-a829-4a91-9f16-72ec6e1a6954
	I0522 18:33:07.492961  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.493092  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:33:07.493557  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.493574  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.493584  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.493594  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.495001  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.495023  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.495032  160939 round_trippers.go:580]     Audit-Id: 451564e8-a844-4514-b8e9-ba808ecbe9d8
	I0522 18:33:07.495042  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.495047  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.495051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.495057  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.495061  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.495200  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.495470  160939 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.495494  160939 pod_ready.go:81] duration metric: took 4.045749ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495507  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495547  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:33:07.495553  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.495561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.495568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.497087  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.497100  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.497105  160939 round_trippers.go:580]     Audit-Id: 1fe00356-708f-49ce-b6e8-360006eb0d30
	I0522 18:33:07.497109  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.497114  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.497119  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.497123  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.497129  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.497236  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:33:07.671971  160939 request.go:629] Waited for 174.334017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672035  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672040  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.672048  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.672051  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.673738  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.673754  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.673762  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.673769  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.673773  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.673777  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.673781  160939 round_trippers.go:580]     Audit-Id: 72f84e56-248f-49c0-b60e-16c5fc7a3e8c
	I0522 18:33:07.673785  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.673915  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.674199  160939 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.674216  160939 pod_ready.go:81] duration metric: took 178.701037ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
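The "Waited for ... due to client-side throttling, not priority and fairness" entries are emitted by client-go's own token-bucket limiter; the X-Kubernetes-Pf-* response headers above show server-side API Priority and Fairness in play, but the delays here are purely local. The limiter lives on rest.Config; a sketch with client-go's default values (kubeconfigPath is a placeholder):

    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
    if err != nil {
        log.Fatal(err)
    }
    cfg.QPS = 5    // steady-state requests per second (client-go default)
    cfg.Burst = 10 // extra headroom for short bursts before requests wait
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }

Because every readiness probe issues two back-to-back GETs (pod, then node), the burst budget drains quickly, which is why most of the checks from here on pause for ~180-200ms first.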
	I0522 18:33:07.674225  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.871582  160939 request.go:629] Waited for 197.277518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871632  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.871646  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.871651  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.873675  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.873695  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.873702  160939 round_trippers.go:580]     Audit-Id: d0aea0c3-6995-4f17-9b3f-5c0b00c0a82e
	I0522 18:33:07.873707  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.873710  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.873714  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.873718  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.873721  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.873885  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:33:08.071516  160939 request.go:629] Waited for 197.279562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071592  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071600  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.071608  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.071612  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.073750  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.074093  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.074136  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.074152  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.074164  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.074178  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.074192  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.074205  160939 round_trippers.go:580]     Audit-Id: 9b07fddc-fd9a-4741-b67f-7bda2d392bdb
	I0522 18:33:08.074358  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:08.074852  160939 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:08.074892  160939 pod_ready.go:81] duration metric: took 400.659133ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:08.074912  160939 pod_ready.go:38] duration metric: took 16.114074117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:33:08.074944  160939 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:33:08.075020  160939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:33:08.085416  160939 command_runner.go:130] > 2247
	I0522 18:33:08.086205  160939 api_server.go:72] duration metric: took 16.337127031s to wait for apiserver process to appear ...
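The process probe above shells out over SSH: pgrep -x requires an exact pattern match, -n selects the newest matching process, and -f matches against the full command line, so the single output line ("2247") is the apiserver's PID. A rough local equivalent via os/exec (sketch; needs os/exec, fmt, log):

    // pgrep exits non-zero when nothing matches, so err != nil doubles as
    // "process not found".
    out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    if err != nil {
        log.Fatalf("kube-apiserver process not found: %v", err)
    }
    fmt.Printf("apiserver pid: %s", out)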
	I0522 18:33:08.086224  160939 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:33:08.086244  160939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:33:08.090306  160939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:33:08.090371  160939 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:33:08.090381  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.090392  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.090411  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.091107  160939 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:33:08.091121  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.091127  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.091130  160939 round_trippers.go:580]     Audit-Id: d9f416c6-963b-4b2c-9260-40a10a9a60da
	I0522 18:33:08.091133  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.091136  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.091138  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.091141  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.091144  160939 round_trippers.go:580]     Content-Length: 263
	I0522 18:33:08.091156  160939 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:33:08.091223  160939 api_server.go:141] control plane version: v1.30.1
	I0522 18:33:08.091237  160939 api_server.go:131] duration metric: took 5.007834ms to wait for apiserver health ...
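The health gate is two plain GETs: /healthz must answer 200 with body "ok", then /version reports the control-plane build the log summarizes as v1.30.1. The same version check goes through client-go's discovery client (sketch, clientset as above):

    info, err := clientset.Discovery().ServerVersion()
    if err != nil {
        log.Fatal(err)
    }
    // info's fields correspond to the JSON body above (major "1", minor "30", ...).
    fmt.Printf("control plane version: %s\n", info.GitVersion) // v1.30.1 here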
	I0522 18:33:08.091244  160939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:33:08.271652  160939 request.go:629] Waited for 180.311539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271713  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271719  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.271727  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.271732  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.282797  160939 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0522 18:33:08.282826  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.282835  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.282840  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.282847  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.282853  160939 round_trippers.go:580]     Audit-Id: abfdd3f0-3612-4cc0-9cb4-169b86afc2f2
	I0522 18:33:08.282857  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.282862  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.284550  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.287099  160939 system_pods.go:59] 8 kube-system pods found
	I0522 18:33:08.287133  160939 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.287139  160939 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.287143  160939 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.287148  160939 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.287156  160939 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.287161  160939 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.287170  160939 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.287175  160939 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.287184  160939 system_pods.go:74] duration metric: took 195.931068ms to wait for pod list to return data ...
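
The "Waited for ... due to client-side throttling" lines above come from client-go's default client-side rate limiter, not from API priority and fairness on the server. A minimal sketch, assuming a reachable kubeconfig (the path is a placeholder), of where the relevant QPS and Burst knobs live:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// Sketch: client-go delays requests once they exceed Burst at once or QPS
// sustained, which is what produces the "Waited ... due to client-side
// throttling" log lines above. The defaults are QPS=5, Burst=10.
func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50 // raising these shortens the waits seen in the log
	cfg.Burst = 100
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}
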
	I0522 18:33:08.287199  160939 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:33:08.471518  160939 request.go:629] Waited for 184.244722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471609  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471620  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.471632  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.471638  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.473861  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.473879  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.473885  160939 round_trippers.go:580]     Audit-Id: 373a6323-7376-4ad7-973b-c7b9843fbc1e
	I0522 18:33:08.473889  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.473892  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.473895  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.473898  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.473902  160939 round_trippers.go:580]     Content-Length: 261
	I0522 18:33:08.473906  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.473926  160939 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:33:08.474181  160939 default_sa.go:45] found service account: "default"
	I0522 18:33:08.474221  160939 default_sa.go:55] duration metric: took 187.005275ms for default service account to be created ...
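
The service-account wait above is a simple poll against the API. A sketch of that loop, assuming clientset is an already-built *kubernetes.Clientset (the function name is illustrative):

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists, the
// same condition the log above reports as satisfied after ~187ms.
func waitForDefaultSA(ctx context.Context, clientset *kubernetes.Clientset) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := clientset.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return false, nil // not created yet; keep polling
			}
			return err == nil, err
		})
}
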
	I0522 18:33:08.474236  160939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:33:08.671668  160939 request.go:629] Waited for 197.344631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671731  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671738  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.671747  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.671754  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.674660  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.674693  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.674702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.674707  160939 round_trippers.go:580]     Audit-Id: a86ce0e7-c7ca-4d9a-b3f4-5977392399ab
	I0522 18:33:08.674710  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.674715  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.674721  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.674726  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.675199  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.677649  160939 system_pods.go:86] 8 kube-system pods found
	I0522 18:33:08.677676  160939 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.677682  160939 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.677689  160939 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.677700  160939 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.677712  160939 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.677718  160939 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.677728  160939 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.677736  160939 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.677746  160939 system_pods.go:126] duration metric: took 203.502619ms to wait for k8s-apps to be running ...
	I0522 18:33:08.677758  160939 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:33:08.677814  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:33:08.688253  160939 system_svc.go:56] duration metric: took 10.491535ms WaitForService to wait for kubelet
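
The kubelet check above runs `sudo systemctl is-active --quiet service kubelet` through the ssh_runner and keys off the exit status alone. A sketch of the same probe with golang.org/x/crypto/ssh; user, address, and key path are placeholders, not values recovered from this run:

package main

import (
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// Sketch: run the systemctl probe over SSH and treat exit status 0 as
// "service running"; a non-zero exit surfaces as an *ssh.ExitError.
func main() {
	key, err := os.ReadFile("/path/to/machines/node/id_rsa") // placeholder
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32902", cfg) // port taken from the log above
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
		log.Fatalf("kubelet not active: %v", err)
	}
	log.Println("kubelet is active")
}
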
	I0522 18:33:08.688273  160939 kubeadm.go:576] duration metric: took 16.939194998s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:33:08.688296  160939 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:33:08.871835  160939 request.go:629] Waited for 183.471986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871919  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.871941  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.871948  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.873838  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:08.873861  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.873868  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.873874  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.873881  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.873884  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.873888  160939 round_trippers.go:580]     Audit-Id: 58d6eaf2-6ad2-480d-a68d-b490633e56b2
	I0522 18:33:08.873893  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.874043  160939 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"433","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5061 chars]
	I0522 18:33:08.874388  160939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:33:08.874407  160939 node_conditions.go:123] node cpu capacity is 8
	I0522 18:33:08.874418  160939 node_conditions.go:105] duration metric: took 186.116583ms to run NodePressure ...
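
The NodePressure step above only reads capacity fields off the NodeList it just fetched. A sketch of that read, assuming clientset is an existing *kubernetes.Clientset:

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printNodeCapacity lists nodes and reports the same two quantities the
// log prints above (ephemeral storage and CPU capacity).
func printNodeCapacity(ctx context.Context, clientset *kubernetes.Clientset) error {
	nodes, err := clientset.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
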
	I0522 18:33:08.874431  160939 start.go:240] waiting for startup goroutines ...
	I0522 18:33:08.874437  160939 start.go:245] waiting for cluster config update ...
	I0522 18:33:08.874451  160939 start.go:254] writing updated cluster config ...
	I0522 18:33:08.876274  160939 out.go:177] 
	I0522 18:33:08.877676  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:33:08.877789  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.879303  160939 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:33:08.880612  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:33:08.881728  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:33:08.882756  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:08.882774  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:33:08.882785  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:33:08.882855  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:33:08.882870  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:33:08.882934  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.898326  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:33:08.898343  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:33:08.898358  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:33:08.898387  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:33:08.898479  160939 start.go:364] duration metric: took 72.592µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:33:08.898505  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
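
acquireMachinesLock above is configured with Delay:500ms and Timeout:10m0s. A generic sketch of those semantics (tryLock is a hypothetical stand-in, not minikube's lock primitive): poll a non-blocking acquire every Delay until Timeout elapses.

import (
	"fmt"
	"time"
)

// acquireWithTimeout retries a non-blocking acquire every delay until the
// deadline, mirroring the Delay/Timeout pair shown in the lock spec above.
func acquireWithTimeout(tryLock func() bool, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !tryLock() {
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for machines lock", timeout)
		}
		time.Sleep(delay)
	}
	return nil
}
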
	I0522 18:33:08.898623  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:33:08.900307  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:33:08.900408  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:33:08.900435  160939 client.go:168] LocalClient.Create starting
	I0522 18:33:08.900508  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:33:08.900541  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900564  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900623  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:33:08.900647  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900668  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900894  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:33:08.915750  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc001f32540 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:33:08.915790  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
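
The static IP above is derived from the existing network's subnet rather than asked of Docker. A sketch of that arithmetic with net/netip; the offsets (gateway, primary node, m02) are inferred from the addresses in this log:

package main

import (
	"fmt"
	"net/netip"
)

// Sketch: step through the subnet's first host addresses to find the
// next free slot for the second node.
func main() {
	prefix := netip.MustParsePrefix("192.168.67.0/24")
	addr := prefix.Addr().Next() // 192.168.67.1, the gateway
	addr = addr.Next()           // 192.168.67.2, the primary node
	addr = addr.Next()           // 192.168.67.3, the m02 node
	fmt.Println(addr, prefix.Contains(addr)) // 192.168.67.3 true
}
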
	I0522 18:33:08.915845  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:33:08.930295  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:33:08.945898  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:33:08.945964  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:33:09.453161  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:33:09.453202  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:09.453224  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:33:09.453289  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:33:13.570301  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.116968437s)
	I0522 18:33:13.570337  160939 kic.go:203] duration metric: took 4.117109757s to extract preloaded images to volume ...
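
The preload is unpacked by running tar inside a throwaway container with the tarball and the node's volume both mounted, as the docker run line above shows. A sketch of issuing that same invocation from Go; the host path and image reference are placeholders:

import (
	"log"
	"os/exec"
)

// runExtract shells out to docker the way cli_runner does above; the tar
// flags mirror the log line.
func runExtract() {
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", "/path/to/preloaded-images.tar.lz4:/preloaded.tar:ro", // placeholder host path
		"-v", "multinode-737786-m02:/extractDir",
		"kicbase:placeholder-tag", // placeholder image reference
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
}
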
	W0522 18:33:13.570466  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:33:13.570568  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:33:13.614931  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:33:13.883217  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:33:13.899745  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:13.916953  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:33:13.956223  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:33:13.956258  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:33:14.377830  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:33:14.377884  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:33:14.398081  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.414616  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:33:14.414636  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
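
Key setup above generates an RSA keypair on the host, then copies the public half into the container's /home/docker/.ssh/authorized_keys. A minimal sketch of that generation step; output paths are placeholders:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// Sketch: write a PEM private key and the matching authorized_keys line.
func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil { // placeholder path
		log.Fatal(err)
	}
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	// MarshalAuthorizedKey emits the "ssh-rsa AAAA..." line copied above.
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		log.Fatal(err)
	}
}
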
	I0522 18:33:14.454848  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.472868  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:33:14.472944  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.489872  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.490088  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.490103  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:33:14.602489  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.602516  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:33:14.602569  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.619132  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.619380  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.619398  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:33:14.740786  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.740854  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.756827  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.756995  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.757012  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:33:14.867113  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:33:14.867142  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:33:14.867157  160939 ubuntu.go:177] setting up certificates
	I0522 18:33:14.867169  160939 provision.go:84] configureAuth start
	I0522 18:33:14.867230  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.882769  160939 provision.go:87] duration metric: took 15.590775ms to configureAuth
	W0522 18:33:14.882788  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.882814  160939 retry.go:31] will retry after 133.214µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
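
Every configureAuth attempt that follows fails the same way, and the inspect command above suggests why: the template indexes .NetworkSettings.Networks with the key "multinode-737786-m02", while the docker run earlier attached the container with --network multinode-737786, so the with block would render an empty string. Splitting empty output on "," still yields one element, hence "should have 2 values, got 1 values". A sketch of that parse failure:

package main

import (
	"fmt"
	"strings"
)

// Sketch: the Go template renders "" when the network key is missing, and
// strings.Split("", ",") still returns a one-element slice.
func main() {
	matched := "192.168.67.3," // template output when the key exists (IPv6 empty)
	missing := ""              // template output when the network key is absent
	fmt.Println(len(strings.Split(matched, ","))) // 2: IPv4 and empty IPv6
	fmt.Println(len(strings.Split(missing, ","))) // 1: the retried error above
}
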
	I0522 18:33:14.883930  160939 provision.go:84] configureAuth start
	I0522 18:33:14.883986  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.899452  160939 provision.go:87] duration metric: took 15.501642ms to configureAuth
	W0522 18:33:14.899474  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.899491  160939 retry.go:31] will retry after 108.916µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.900597  160939 provision.go:84] configureAuth start
	I0522 18:33:14.900654  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.915555  160939 provision.go:87] duration metric: took 14.940574ms to configureAuth
	W0522 18:33:14.915579  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.915597  160939 retry.go:31] will retry after 309.632µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.916706  160939 provision.go:84] configureAuth start
	I0522 18:33:14.916763  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.931974  160939 provision.go:87] duration metric: took 15.250688ms to configureAuth
	W0522 18:33:14.931998  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.932022  160939 retry.go:31] will retry after 318.322µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.933148  160939 provision.go:84] configureAuth start
	I0522 18:33:14.933214  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.948456  160939 provision.go:87] duration metric: took 15.28648ms to configureAuth
	W0522 18:33:14.948480  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.948498  160939 retry.go:31] will retry after 399.734µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.949641  160939 provision.go:84] configureAuth start
	I0522 18:33:14.949703  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.966281  160939 provision.go:87] duration metric: took 16.616876ms to configureAuth
	W0522 18:33:14.966304  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.966321  160939 retry.go:31] will retry after 408.958µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.967426  160939 provision.go:84] configureAuth start
	I0522 18:33:14.967490  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.983570  160939 provision.go:87] duration metric: took 16.124586ms to configureAuth
	W0522 18:33:14.983595  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.983618  160939 retry.go:31] will retry after 1.326072ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.985801  160939 provision.go:84] configureAuth start
	I0522 18:33:14.985868  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.000835  160939 provision.go:87] duration metric: took 15.012309ms to configureAuth
	W0522 18:33:15.000856  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.000876  160939 retry.go:31] will retry after 915.276µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.001989  160939 provision.go:84] configureAuth start
	I0522 18:33:15.002061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.016920  160939 provision.go:87] duration metric: took 14.912197ms to configureAuth
	W0522 18:33:15.016940  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.016956  160939 retry.go:31] will retry after 2.309554ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.020139  160939 provision.go:84] configureAuth start
	I0522 18:33:15.020206  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.035720  160939 provision.go:87] duration metric: took 15.563337ms to configureAuth
	W0522 18:33:15.035737  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.035758  160939 retry.go:31] will retry after 5.684682ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.041949  160939 provision.go:84] configureAuth start
	I0522 18:33:15.042023  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.057131  160939 provision.go:87] duration metric: took 15.161716ms to configureAuth
	W0522 18:33:15.057153  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.057173  160939 retry.go:31] will retry after 7.16749ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.065354  160939 provision.go:84] configureAuth start
	I0522 18:33:15.065419  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.080211  160939 provision.go:87] duration metric: took 14.836861ms to configureAuth
	W0522 18:33:15.080233  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.080253  160939 retry.go:31] will retry after 11.273171ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.092437  160939 provision.go:84] configureAuth start
	I0522 18:33:15.092522  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.107812  160939 provision.go:87] duration metric: took 15.35491ms to configureAuth
	W0522 18:33:15.107829  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.107845  160939 retry.go:31] will retry after 8.109728ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.117029  160939 provision.go:84] configureAuth start
	I0522 18:33:15.117103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.132558  160939 provision.go:87] duration metric: took 15.508983ms to configureAuth
	W0522 18:33:15.132577  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.132597  160939 retry.go:31] will retry after 10.345201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.143792  160939 provision.go:84] configureAuth start
	I0522 18:33:15.143857  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.159011  160939 provision.go:87] duration metric: took 15.196792ms to configureAuth
	W0522 18:33:15.159034  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.159054  160939 retry.go:31] will retry after 30.499115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.190240  160939 provision.go:84] configureAuth start
	I0522 18:33:15.190329  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.207177  160939 provision.go:87] duration metric: took 16.913741ms to configureAuth
	W0522 18:33:15.207195  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.207211  160939 retry.go:31] will retry after 63.879043ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.271445  160939 provision.go:84] configureAuth start
	I0522 18:33:15.271548  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.287528  160939 provision.go:87] duration metric: took 16.057048ms to configureAuth
	W0522 18:33:15.287550  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.287569  160939 retry.go:31] will retry after 67.853567ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.355802  160939 provision.go:84] configureAuth start
	I0522 18:33:15.355901  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.372258  160939 provision.go:87] duration metric: took 16.425467ms to configureAuth
	W0522 18:33:15.372281  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.372300  160939 retry.go:31] will retry after 129.065548ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.501513  160939 provision.go:84] configureAuth start
	I0522 18:33:15.501606  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.517774  160939 provision.go:87] duration metric: took 16.234544ms to configureAuth
	W0522 18:33:15.517792  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.517809  160939 retry.go:31] will retry after 177.855143ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.696167  160939 provision.go:84] configureAuth start
	I0522 18:33:15.696277  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.712184  160939 provision.go:87] duration metric: took 15.973904ms to configureAuth
	W0522 18:33:15.712203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.712222  160939 retry.go:31] will retry after 282.785493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.995691  160939 provision.go:84] configureAuth start
	I0522 18:33:15.995782  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.011555  160939 provision.go:87] duration metric: took 15.836293ms to configureAuth
	W0522 18:33:16.011573  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.011590  160939 retry.go:31] will retry after 182.7986ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.194929  160939 provision.go:84] configureAuth start
	I0522 18:33:16.195022  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.210991  160939 provision.go:87] duration metric: took 16.035288ms to configureAuth
	W0522 18:33:16.211015  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.211031  160939 retry.go:31] will retry after 462.848752ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.674586  160939 provision.go:84] configureAuth start
	I0522 18:33:16.674669  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.691880  160939 provision.go:87] duration metric: took 17.266922ms to configureAuth
	W0522 18:33:16.691906  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.691924  160939 retry.go:31] will retry after 502.555206ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.194526  160939 provision.go:84] configureAuth start
	I0522 18:33:17.194646  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.210421  160939 provision.go:87] duration metric: took 15.865877ms to configureAuth
	W0522 18:33:17.210440  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.210460  160939 retry.go:31] will retry after 567.726401ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.779177  160939 provision.go:84] configureAuth start
	I0522 18:33:17.779290  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.795539  160939 provision.go:87] duration metric: took 16.336289ms to configureAuth
	W0522 18:33:17.795558  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.795575  160939 retry.go:31] will retry after 1.826878631s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.622720  160939 provision.go:84] configureAuth start
	I0522 18:33:19.622824  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:19.638518  160939 provision.go:87] duration metric: took 15.756609ms to configureAuth
	W0522 18:33:19.638535  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.638551  160939 retry.go:31] will retry after 1.924893574s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.564442  160939 provision.go:84] configureAuth start
	I0522 18:33:21.564544  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:21.580835  160939 provision.go:87] duration metric: took 16.362041ms to configureAuth
	W0522 18:33:21.580858  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.580874  160939 retry.go:31] will retry after 4.939303373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.521956  160939 provision.go:84] configureAuth start
	I0522 18:33:26.522061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:26.537982  160939 provision.go:87] duration metric: took 16.001203ms to configureAuth
	W0522 18:33:26.538004  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.538030  160939 retry.go:31] will retry after 3.636518909s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.175081  160939 provision.go:84] configureAuth start
	I0522 18:33:30.175184  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:30.191022  160939 provision.go:87] duration metric: took 15.915164ms to configureAuth
	W0522 18:33:30.191041  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.191058  160939 retry.go:31] will retry after 10.480093853s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.671328  160939 provision.go:84] configureAuth start
	I0522 18:33:40.671406  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:40.687409  160939 provision.go:87] duration metric: took 16.054951ms to configureAuth
	W0522 18:33:40.687427  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.687455  160939 retry.go:31] will retry after 15.937633407s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.627256  160939 provision.go:84] configureAuth start
	I0522 18:33:56.627376  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:56.643481  160939 provision.go:87] duration metric: took 16.179065ms to configureAuth
	W0522 18:33:56.643501  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.643521  160939 retry.go:31] will retry after 13.921044681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.565323  160939 provision.go:84] configureAuth start
	I0522 18:34:10.565412  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:10.582184  160939 provision.go:87] duration metric: took 16.828213ms to configureAuth
	W0522 18:34:10.582203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.582221  160939 retry.go:31] will retry after 29.913467421s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.496709  160939 provision.go:84] configureAuth start
	I0522 18:34:40.496791  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:40.512924  160939 provision.go:87] duration metric: took 16.185762ms to configureAuth
	W0522 18:34:40.512946  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512964  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512971  160939 machine.go:97] duration metric: took 1m26.040084691s to provisionDockerMachine
	I0522 18:34:40.512977  160939 client.go:171] duration metric: took 1m31.612534317s to LocalClient.Create
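
The "will retry after ..." intervals above grow from microseconds to roughly 30 seconds: exponential backoff with jitter, bounded by a cap and an overall deadline. A generic sketch of the pattern (illustrative, not minikube's retry package):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries op with exponentially growing, jittered waits until it
// succeeds, the wait hits max, or the overall deadline passes.
func retryExpo(op func() error, initial, max, deadline time.Duration) error {
	start := time.Now()
	wait := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("giving up after %s: %w", deadline, err)
		}
		// Full jitter: sleep a random duration in [0, wait).
		time.Sleep(time.Duration(rand.Int63n(int64(wait))))
		if wait *= 2; wait > max {
			wait = max
		}
	}
}

func main() {
	err := retryExpo(func() error { return errors.New("temporary") },
		100*time.Microsecond, 30*time.Second, 2*time.Second)
	fmt.Println(err)
}
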
	I0522 18:34:42.514189  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.514234  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:42.530404  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:34:42.611715  160939 command_runner.go:130] > 27%
	I0522 18:34:42.611789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:34:42.615669  160939 command_runner.go:130] > 214G
	I0522 18:34:42.615707  160939 start.go:128] duration metric: took 1m33.717073149s to createHost
	I0522 18:34:42.615722  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m33.717228717s
	W0522 18:34:42.615744  160939 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:42.616137  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:42.632434  160939 stop.go:39] StopHost: multinode-737786-m02
	W0522 18:34:42.632685  160939 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.634506  160939 out.go:177] * Stopping node "multinode-737786-m02"  ...
	I0522 18:34:42.635683  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	W0522 18:34:42.651010  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.652276  160939 out.go:177] * Powering off "multinode-737786-m02" via SSH ...
	I0522 18:34:42.653470  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	I0522 18:34:43.708767  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.725456  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:43.725497  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:43.725503  160939 stop.go:96] shutdown container: err=<nil>
	I0522 18:34:43.725538  160939 main.go:141] libmachine: Stopping "multinode-737786-m02"...
	I0522 18:34:43.725609  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.740494  160939 stop.go:66] stop err: Machine "multinode-737786-m02" is already stopped.
	I0522 18:34:43.740519  160939 stop.go:69] host is already stopped
	W0522 18:34:44.740739  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:44.742589  160939 out.go:177] * Deleting "multinode-737786-m02" in docker ...
	I0522 18:34:44.743791  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	I0522 18:34:44.759917  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:44.775348  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	W0522 18:34:44.791230  160939 cli_runner.go:211] docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:34:44.791265  160939 oci.go:650] error shutdown multinode-737786-m02: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 2dc5a71c55c9ef5d6ad1baa728c2ff15efe34f377c26beee83af68ffc394ce01 is not running
	I0522 18:34:45.792215  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:45.808448  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:45.808478  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:45.808522  160939 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m02
	I0522 18:34:45.828241  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	W0522 18:34:45.843001  160939 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m02 returned with exit code 1
	I0522 18:34:45.843068  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:45.858067  160939 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:34:45.872863  160939 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:34:45.872955  160939 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
	W0522 18:34:45.873163  160939 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:45.873175  160939 start.go:728] Will try again in 5 seconds ...
	I0522 18:34:50.874261  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:34:50.874388  160939 start.go:364] duration metric: took 68.497µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:34:50.874412  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:34:50.874486  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:34:50.876407  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:34:50.876543  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:34:50.876576  160939 client.go:168] LocalClient.Create starting
	I0522 18:34:50.876662  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:34:50.876712  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876732  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.876835  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:34:50.876869  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876890  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.877138  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:50.893470  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc0009258c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:34:50.893509  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
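	
	With the cluster network on 192.168.67.0/24, the gateway takes .1, the primary node .2, and each additional node the next host address, which is how m02 lands on 192.168.67.3. A rough sketch of that arithmetic (illustrative only; nodeIP is an invented helper, not minikube's code):
	
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// nodeIP derives a per-node static IP from the cluster subnet:
	// the gateway holds .1, node 1 gets .2, node 2 (m02) gets .3, and so on.
	func nodeIP(subnet string, nodeIndex int) (net.IP, error) {
		_, ipnet, err := net.ParseCIDR(subnet)
		if err != nil {
			return nil, err
		}
		ip := ipnet.IP.To4()
		if ip == nil {
			return nil, fmt.Errorf("IPv4 subnet expected: %s", subnet)
		}
		out := make(net.IP, len(ip))
		copy(out, ip)
		out[3] += byte(1 + nodeIndex) // .2 for node 1, .3 for node 2 (m02), ...
		return out, nil
	}
	
	func main() {
		ip, _ := nodeIP("192.168.67.0/24", 2) // m02 is the second node
		fmt.Println(ip)                       // 192.168.67.3
	}
	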
	I0522 18:34:50.893558  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:34:50.909079  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:34:50.925444  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:34:50.925538  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:34:51.321868  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:34:51.321909  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:34:51.321928  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:34:51.321980  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:34:55.613221  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291204502s)
	I0522 18:34:55.613251  160939 kic.go:203] duration metric: took 4.291320169s to extract preloaded images to volume ...
	W0522 18:34:55.613360  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:34:55.613435  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:34:55.658317  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:34:55.924047  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:34:55.941247  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:55.958588  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:34:56.004446  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:34:56.004476  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:34:56.219497  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:34:56.219536  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:34:56.240489  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.268881  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:34:56.268907  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:34:56.353114  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.375972  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:34:56.376058  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.395706  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.395915  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.395934  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:34:56.554445  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.554477  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:34:56.554533  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.573230  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.573401  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.573414  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:34:56.702163  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.702242  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.718029  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.718187  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.718204  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:34:56.830876  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:34:56.830907  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:34:56.830922  160939 ubuntu.go:177] setting up certificates
	I0522 18:34:56.830931  160939 provision.go:84] configureAuth start
	I0522 18:34:56.830976  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.846805  160939 provision.go:87] duration metric: took 15.865379ms to configureAuth
	W0522 18:34:56.846831  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.846851  160939 retry.go:31] will retry after 140.64µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.847967  160939 provision.go:84] configureAuth start
	I0522 18:34:56.848042  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.862744  160939 provision.go:87] duration metric: took 14.756628ms to configureAuth
	W0522 18:34:56.862761  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.862777  160939 retry.go:31] will retry after 137.24µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.863887  160939 provision.go:84] configureAuth start
	I0522 18:34:56.863944  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.878368  160939 provision.go:87] duration metric: took 14.464443ms to configureAuth
	W0522 18:34:56.878383  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.878401  160939 retry.go:31] will retry after 307.999µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.879516  160939 provision.go:84] configureAuth start
	I0522 18:34:56.879573  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.894089  160939 provision.go:87] duration metric: took 14.555182ms to configureAuth
	W0522 18:34:56.894104  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.894119  160939 retry.go:31] will retry after 344.81µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.895224  160939 provision.go:84] configureAuth start
	I0522 18:34:56.895305  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.909660  160939 provision.go:87] duration metric: took 14.420335ms to configureAuth
	W0522 18:34:56.909677  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.909697  160939 retry.go:31] will retry after 721.739µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.910804  160939 provision.go:84] configureAuth start
	I0522 18:34:56.910856  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.925678  160939 provision.go:87] duration metric: took 14.857697ms to configureAuth
	W0522 18:34:56.925695  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.925714  160939 retry.go:31] will retry after 381.6µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.926834  160939 provision.go:84] configureAuth start
	I0522 18:34:56.926886  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.941681  160939 provision.go:87] duration metric: took 14.831201ms to configureAuth
	W0522 18:34:56.941702  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.941722  160939 retry.go:31] will retry after 897.088µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.942836  160939 provision.go:84] configureAuth start
	I0522 18:34:56.942908  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.957491  160939 provision.go:87] duration metric: took 14.636033ms to configureAuth
	W0522 18:34:56.957512  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.957529  160939 retry.go:31] will retry after 1.800181ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.959714  160939 provision.go:84] configureAuth start
	I0522 18:34:56.959790  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.976307  160939 provision.go:87] duration metric: took 16.571335ms to configureAuth
	W0522 18:34:56.976326  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.976342  160939 retry.go:31] will retry after 2.324455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.979479  160939 provision.go:84] configureAuth start
	I0522 18:34:56.979532  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.994677  160939 provision.go:87] duration metric: took 15.180277ms to configureAuth
	W0522 18:34:56.994693  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.994709  160939 retry.go:31] will retry after 3.105759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.998893  160939 provision.go:84] configureAuth start
	I0522 18:34:56.998946  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.014214  160939 provision.go:87] duration metric: took 15.303755ms to configureAuth
	W0522 18:34:57.014235  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.014254  160939 retry.go:31] will retry after 5.839455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.020445  160939 provision.go:84] configureAuth start
	I0522 18:34:57.020525  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.035868  160939 provision.go:87] duration metric: took 15.4048ms to configureAuth
	W0522 18:34:57.035886  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.035903  160939 retry.go:31] will retry after 5.406932ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.042088  160939 provision.go:84] configureAuth start
	I0522 18:34:57.042156  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.058449  160939 provision.go:87] duration metric: took 16.342041ms to configureAuth
	W0522 18:34:57.058472  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.058492  160939 retry.go:31] will retry after 11.838168ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.070675  160939 provision.go:84] configureAuth start
	I0522 18:34:57.070741  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.085470  160939 provision.go:87] duration metric: took 14.777244ms to configureAuth
	W0522 18:34:57.085486  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.085502  160939 retry.go:31] will retry after 23.959822ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.109694  160939 provision.go:84] configureAuth start
	I0522 18:34:57.109776  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.124985  160939 provision.go:87] duration metric: took 15.261358ms to configureAuth
	W0522 18:34:57.125000  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.125016  160939 retry.go:31] will retry after 27.869578ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.153221  160939 provision.go:84] configureAuth start
	I0522 18:34:57.153307  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.169108  160939 provision.go:87] duration metric: took 15.85438ms to configureAuth
	W0522 18:34:57.169127  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.169146  160939 retry.go:31] will retry after 51.257536ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.221342  160939 provision.go:84] configureAuth start
	I0522 18:34:57.221408  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.237003  160939 provision.go:87] duration metric: took 15.637311ms to configureAuth
	W0522 18:34:57.237024  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.237043  160939 retry.go:31] will retry after 39.576908ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.277194  160939 provision.go:84] configureAuth start
	I0522 18:34:57.277272  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.292521  160939 provision.go:87] duration metric: took 15.297184ms to configureAuth
	W0522 18:34:57.292539  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.292557  160939 retry.go:31] will retry after 99.452062ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.392811  160939 provision.go:84] configureAuth start
	I0522 18:34:57.392913  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.410711  160939 provision.go:87] duration metric: took 17.84636ms to configureAuth
	W0522 18:34:57.410765  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.410815  160939 retry.go:31] will retry after 143.960372ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.555133  160939 provision.go:84] configureAuth start
	I0522 18:34:57.555208  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.571320  160939 provision.go:87] duration metric: took 16.160526ms to configureAuth
	W0522 18:34:57.571343  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.571360  160939 retry.go:31] will retry after 155.348601ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.727681  160939 provision.go:84] configureAuth start
	I0522 18:34:57.727762  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.743313  160939 provision.go:87] duration metric: took 15.603694ms to configureAuth
	W0522 18:34:57.743335  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.743351  160939 retry.go:31] will retry after 378.804808ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.122902  160939 provision.go:84] configureAuth start
	I0522 18:34:58.123010  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.139688  160939 provision.go:87] duration metric: took 16.744877ms to configureAuth
	W0522 18:34:58.139707  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.139724  160939 retry.go:31] will retry after 334.927027ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.475218  160939 provision.go:84] configureAuth start
	I0522 18:34:58.475348  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.491224  160939 provision.go:87] duration metric: took 15.959288ms to configureAuth
	W0522 18:34:58.491241  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.491258  160939 retry.go:31] will retry after 382.857061ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.874898  160939 provision.go:84] configureAuth start
	I0522 18:34:58.875006  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.891400  160939 provision.go:87] duration metric: took 16.476022ms to configureAuth
	W0522 18:34:58.891425  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.891445  160939 retry.go:31] will retry after 908.607112ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.800452  160939 provision.go:84] configureAuth start
	I0522 18:34:59.800565  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:59.817521  160939 provision.go:87] duration metric: took 17.040678ms to configureAuth
	W0522 18:34:59.817541  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.817559  160939 retry.go:31] will retry after 2.399990762s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.218011  160939 provision.go:84] configureAuth start
	I0522 18:35:02.218103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:02.233382  160939 provision.go:87] duration metric: took 15.343422ms to configureAuth
	W0522 18:35:02.233400  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.233417  160939 retry.go:31] will retry after 3.631413751s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.866094  160939 provision.go:84] configureAuth start
	I0522 18:35:05.866192  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:05.883038  160939 provision.go:87] duration metric: took 16.913162ms to configureAuth
	W0522 18:35:05.883057  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.883075  160939 retry.go:31] will retry after 4.401726343s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.285941  160939 provision.go:84] configureAuth start
	I0522 18:35:10.286047  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:10.303158  160939 provision.go:87] duration metric: took 17.185304ms to configureAuth
	W0522 18:35:10.303178  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.303195  160939 retry.go:31] will retry after 5.499851087s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.803345  160939 provision.go:84] configureAuth start
	I0522 18:35:15.803456  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:15.820047  160939 provision.go:87] duration metric: took 16.668915ms to configureAuth
	W0522 18:35:15.820069  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.820088  160939 retry.go:31] will retry after 6.21478213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.035749  160939 provision.go:84] configureAuth start
	I0522 18:35:22.035888  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:22.052346  160939 provision.go:87] duration metric: took 16.569923ms to configureAuth
	W0522 18:35:22.052365  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.052383  160939 retry.go:31] will retry after 10.717404274s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.770612  160939 provision.go:84] configureAuth start
	I0522 18:35:32.770702  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:32.786847  160939 provision.go:87] duration metric: took 16.20902ms to configureAuth
	W0522 18:35:32.786866  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.786882  160939 retry.go:31] will retry after 26.374349839s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
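	
	The retry.go delays above grow from microseconds to tens of seconds with visible jitter. A compact sketch of jittered exponential backoff in that spirit (an assumed shape, not minikube's actual retry implementation):
	
	package main
	
	import (
		"fmt"
		"math/rand"
		"time"
	)
	
	// backoff roughly doubles a base delay per attempt and adds up to +100%
	// random jitter, producing the irregular growth seen in the log above.
	func backoff(attempt int) time.Duration {
		base := 100 * time.Microsecond << uint(attempt)
		return base + time.Duration(rand.Int63n(int64(base)))
	}
	
	func main() {
		for attempt := 0; attempt < 10; attempt++ {
			fmt.Println(backoff(attempt))
		}
	}
	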
	I0522 18:35:59.162251  160939 provision.go:84] configureAuth start
	I0522 18:35:59.162338  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:59.177866  160939 provision.go:87] duration metric: took 15.590678ms to configureAuth
	W0522 18:35:59.177883  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.177900  160939 retry.go:31] will retry after 23.779194983s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.957560  160939 provision.go:84] configureAuth start
	I0522 18:36:22.957642  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:36:22.973473  160939 provision.go:87] duration metric: took 15.882846ms to configureAuth
	W0522 18:36:22.973490  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973508  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973514  160939 machine.go:97] duration metric: took 1m26.59751999s to provisionDockerMachine
	I0522 18:36:22.973521  160939 client.go:171] duration metric: took 1m32.0969361s to LocalClient.Create
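	
	One detail worth noting in the failed configureAuth attempts: the inspect template indexes .NetworkSettings.Networks by the container name ("multinode-737786-m02"), while the container was attached to the network named "multinode-737786". Indexing a Go template map with a key that is not present yields nil, so the with-block is skipped and the template prints nothing, which would explain the empty address list. A small repro of that template behavior (a sketch under that assumption, with an invented endpoint type):
	
	package main
	
	import (
		"os"
		"text/template"
	)
	
	// endpoint stands in for Docker's per-network endpoint settings.
	type endpoint struct{ IPAddress, GlobalIPv6Address string }
	
	func main() {
		// The container is attached to the network "multinode-737786"...
		networks := map[string]*endpoint{
			"multinode-737786": {IPAddress: "192.168.67.3"},
		}
		// ...but the template looks up the *container* name, which is absent.
		tmpl := template.Must(template.New("ip").Parse(
			`{{with (index . "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))
		if err := tmpl.Execute(os.Stdout, networks); err != nil {
			panic(err)
		}
		// Output is empty: index on a missing map key yields nil, so with skips the block.
	}
	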
	I0522 18:36:24.974123  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:24.974170  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:36:24.990325  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:36:25.071724  160939 command_runner.go:130] > 27%
	I0522 18:36:25.071789  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:25.075456  160939 command_runner.go:130] > 214G
	I0522 18:36:25.075742  160939 start.go:128] duration metric: took 1m34.201241799s to createHost
	I0522 18:36:25.075767  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m34.20136546s
	W0522 18:36:25.075854  160939 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:25.077767  160939 out.go:177] 
	W0522 18:36:25.079095  160939 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:36:25.079109  160939 out.go:239] * 
	W0522 18:36:25.079919  160939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:36:25.081455  160939 out.go:177] 
	
	
	==> Docker <==
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d47b4f1b846de8efc0e1d2a9a093aa1c61b036813c0fa4e6fc255113be2d96f0/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-fhhmr_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:52 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:52Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-fhhmr_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:53 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:53Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/27a641da2a0926615e5fbbc9a970d575a8053259aa3e760938650e11374b631c/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:32:53 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-fhhmr_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:53 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:53Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	May 22 18:32:55 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:55Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20240513-cd2ac642: Status: Downloaded newer image for kindest/kindnetd:v20240513-cd2ac642"
	May 22 18:32:58 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:32:58Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.198952588Z" level=info msg="ignoring event" container=ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.201580059Z" level=info msg="ignoring event" container=b73d925361c0506c710632a45f5377f1a6bdeaf15f268313a07afd0bac2a2011 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.284487223Z" level=info msg="ignoring event" container=6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.284636073Z" level=info msg="ignoring event" container=d47b4f1b846de8efc0e1d2a9a093aa1c61b036813c0fa4e6fc255113be2d96f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:33:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ada6e7b25c53306480ec3268f02ae3c0a31843cb50792174aefef87684d072cd/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:27 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:36:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7fefb8ab9046a93fa90099406fe22d3ab5b99d1e81ed91b35c2e7790f7cd2c3c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 18:36:29 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:36:29Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e5611854b2b6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   7fefb8ab9046a       busybox-fc5497c4f-7zbr8
	14ca8a91c3a85       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              15 minutes ago      Running             kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	16cb7c11afec8       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   27a641da2a092       storage-provisioner
	b73d925361c05       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   6711c2a968d71       coredns-7db6d8ff4d-jhsz9
	4394527287d9e       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	[INFO] 10.244.0.3:48378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238684s
	[INFO] 10.244.0.3:59221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013090305s
	[INFO] 10.244.0.3:42881 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000740933s
	[INFO] 10.244.0.3:51488 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.022252255s
	[INFO] 10.244.0.3:57389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143058s
	[INFO] 10.244.0.3:48854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005255577s
	[INFO] 10.244.0.3:37749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129992s
	[INFO] 10.244.0.3:49159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143259s
	[INFO] 10.244.0.3:33267 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003880164s
	[INFO] 10.244.0.3:55644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123464s
	[INFO] 10.244.0.3:40518 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115443s
	[INFO] 10.244.0.3:44250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088045s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102385s
	[INFO] 10.244.0.3:58734 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104426s
	[INFO] 10.244.0.3:33373 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089833s
	[INFO] 10.244.0.3:46218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084391s
	
	
	==> coredns [b73d925361c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:48:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 796df425fb994719a2b6ac89f60c2334
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m   node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	
	
	==> dmesg <==
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	[May22 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 88 87 ea 82 8c 08 06
	[  +0.002367] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 1a b3 ac 14 45 08 06
	[May22 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 89 e2 0f b2 b8 08 06
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.364321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.365643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.365639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:32:33.365646Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.365693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.36588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.365903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	{"level":"info","ts":"2024-05-22T18:42:33.669298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-05-22T18:42:33.674226Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":669,"took":"4.650962ms","hash":2988179383,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-22T18:42:33.674261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2988179383,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:47:33.674441Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-05-22T18:47:33.676887Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":911,"took":"2.169071ms","hash":3399617496,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:47:33.676921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3399617496,"revision":911,"compact-revision":669}
	
	
	==> kernel <==
	 18:48:12 up  1:30,  0 users,  load average: 0.50, 0.20, 0.28
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:46:06.593950       1 main.go:227] handling current node
	I0522 18:46:16.597240       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:16.597262       1 main.go:227] handling current node
	I0522 18:46:26.600369       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:26.600394       1 main.go:227] handling current node
	I0522 18:46:36.603602       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:36.603625       1 main.go:227] handling current node
	I0522 18:46:46.614028       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:46.614050       1 main.go:227] handling current node
	I0522 18:46:56.617533       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:56.617557       1 main.go:227] handling current node
	I0522 18:47:06.626038       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:06.626059       1 main.go:227] handling current node
	I0522 18:47:16.629267       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:16.629291       1 main.go:227] handling current node
	I0522 18:47:26.641682       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:26.641705       1 main.go:227] handling current node
	I0522 18:47:36.644822       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:36.644845       1 main.go:227] handling current node
	I0522 18:47:46.656212       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:46.656241       1 main.go:227] handling current node
	I0522 18:47:56.660170       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:56.660193       1 main.go:227] handling current node
	I0522 18:48:06.672213       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:48:06.672242       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6991b35c6800] <==
	E0522 18:32:35.444397       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E0522 18:32:35.445464       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I0522 18:32:35.449798       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:32:35.453291       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:32:35.453308       1 policy_source.go:224] refreshing policies
	I0522 18:32:35.468422       1 controller.go:615] quota admission added evaluator for: namespaces
	I0522 18:32:35.648097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:32:36.270908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 18:32:36.276360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 18:32:36.276373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:32:36.650126       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 18:32:36.683129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 18:32:36.777692       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 18:32:36.791941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0522 18:32:36.793832       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:32:36.798754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 18:32:37.359568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 18:32:37.803958       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 18:32:37.812834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 18:32:37.819384       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 18:32:51.513861       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 18:32:51.614880       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:48:10.913684       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57644: use of closed network connection
	E0522 18:48:11.175047       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57696: use of closed network connection
	E0522 18:48:11.423032       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57770: use of closed network connection
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	I0522 18:36:27.123251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.253947ms"
	I0522 18:36:27.133722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.381144ms"
	I0522 18:36:27.133807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.98µs"
	I0522 18:36:27.133845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.606µs"
	I0522 18:36:30.202749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.775378ms"
	I0522 18:36:30.202822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.162µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:35.377344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.252907    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.988563    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhhmr" podStartSLOduration=2.9885258439999998 podStartE2EDuration="2.988525844s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.988079663 +0000 UTC m=+16.414649501" watchObservedRunningTime="2024-05-22 18:32:53.988525844 +0000 UTC m=+16.415095679"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.995975    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.995953678 podStartE2EDuration="995.953678ms" podCreationTimestamp="2024-05-22 18:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.995721962 +0000 UTC m=+16.422291803" watchObservedRunningTime="2024-05-22 18:32:53.995953678 +0000 UTC m=+16.422523513"
	May 22 18:32:54 multinode-737786 kubelet[2370]: I0522 18:32:54.011952    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jhsz9" podStartSLOduration=3.011934656 podStartE2EDuration="3.011934656s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:54.011824217 +0000 UTC m=+16.438394051" watchObservedRunningTime="2024-05-22 18:32:54.011934656 +0000 UTC m=+16.438504490"
	May 22 18:32:56 multinode-737786 kubelet[2370]: I0522 18:32:56.027149    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qpfbl" podStartSLOduration=2.150242403 podStartE2EDuration="5.027130161s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="2024-05-22 18:32:52.549285586 +0000 UTC m=+14.975855404" lastFinishedPulling="2024-05-22 18:32:55.426173334 +0000 UTC m=+17.852743162" observedRunningTime="2024-05-22 18:32:56.026868759 +0000 UTC m=+18.453438592" watchObservedRunningTime="2024-05-22 18:32:56.027130161 +0000 UTC m=+18.453699994"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.024575    2370 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.025200    2370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467011    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467063    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467471    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.469105    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9" (OuterVolumeSpecName: "kube-api-access-44bz9") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "kube-api-access-44bz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567723    2370 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567767    2370 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.104709    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.116635    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.118819    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: E0522 18:33:07.119523    2370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.119568    2370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"} err="failed to get container status \"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de\": rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.656301    2370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" path="/var/lib/kubelet/pods/be9eeea7-ca23-4606-8965-0eb7a95e4a0d/volumes"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113341    2370 topology_manager.go:215] "Topology Admit Handler" podUID="3cb1c926-1ddd-432d-bfae-23cc2cf1d67e" podNamespace="default" podName="busybox-fc5497c4f-7zbr8"
	May 22 18:36:27 multinode-737786 kubelet[2370]: E0522 18:36:27.113441    2370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113480    2370 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.310549    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2v4\" (UniqueName: \"kubernetes.io/projected/3cb1c926-1ddd-432d-bfae-23cc2cf1d67e-kube-api-access-bt2v4\") pod \"busybox-fc5497c4f-7zbr8\" (UID: \"3cb1c926-1ddd-432d-bfae-23cc2cf1d67e\") " pod="default/busybox-fc5497c4f-7zbr8"
	May 22 18:36:30 multinode-737786 kubelet[2370]: I0522 18:36:30.199164    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-7zbr8" podStartSLOduration=1.5746006019999998 podStartE2EDuration="3.199142439s" podCreationTimestamp="2024-05-22 18:36:27 +0000 UTC" firstStartedPulling="2024-05-22 18:36:27.886226491 +0000 UTC m=+230.312796315" lastFinishedPulling="2024-05-22 18:36:29.510768323 +0000 UTC m=+231.937338152" observedRunningTime="2024-05-22 18:36:30.198865287 +0000 UTC m=+232.625435120" watchObservedRunningTime="2024-05-22 18:36:30.199142439 +0000 UTC m=+232.625712274"
	May 22 18:48:11 multinode-737786 kubelet[2370]: E0522 18:48:11.423039    2370 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:55084->[::1]:43097: write tcp [::1]:55084->[::1]:43097: write: broken pipe
	
	
	==> storage-provisioner [16cb7c11afec] <==
	I0522 18:32:53.558799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:32:53.565899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:32:53.565955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:32:53.572167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:32:53.572280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	I0522 18:32:53.573084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef became leader
	I0522 18:32:53.672834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	

                                                
                                                
-- /stdout --
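The logs above all point the same way: `describe nodes` lists only the primary control-plane node, and kindnet handles a single node IP (192.168.67.2) for the entire run, so the second node requested via `--nodes=2` never registered. A minimal sketch of follow-up checks, assuming minikube's usual `<profile>-m02` naming for secondary node containers (these commands are illustrative, not part of the recorded run):

    # Expect exactly one Ready node if the second node never joined.
    kubectl --context multinode-737786 get nodes -o wide
    # Check whether a multinode-737786-m02 container was ever created at the Docker level.
    docker ps -a --filter "name=multinode-737786"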
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeployApp2Nodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  94s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeployApp2Nodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeployApp2Nodes (706.31s)
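The FailedScheduling event above accounts for the 706s timeout: the busybox pods carry pod anti-affinity (presumably keyed on `kubernetes.io/hostname` to spread one replica per node), so with only one registered node the second replica can never be placed and `rollout status deployment/busybox` blocks until the test gives up. A sketch of how the constraint could be inspected, using standard kubectl JSONPath (illustrative, not from the recorded run):

    # Dump the anti-affinity stanza the scheduler is enforcing on the Pending pod.
    kubectl --context multinode-737786 get pod busybox-fc5497c4f-cq58n \
      -o jsonpath='{.spec.affinity.podAntiAffinity}'
    # Anti-affinity across hostnames needs at least two schedulable nodes.
    kubectl --context multinode-737786 get nodes --no-headers | wc -l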

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (2.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-7zbr8 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-7zbr8 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:572: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-737786 -- exec busybox-fc5497c4f-cq58n -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (106.336477ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-cq58n does not have a host assigned

                                                
                                                
** /stderr **
multinode_test.go:574: Pod busybox-fc5497c4f-cq58n could not resolve 'host.minikube.internal': exit status 1
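This failure follows directly from the scheduling deadlock above: busybox-fc5497c4f-cq58n is still Pending with no node assigned, and `kubectl exec` needs a running container, so the nslookup pipeline (which extracts the resolved address from line 5 of nslookup's output) never executes in that pod. A quick confirmation, sketched with standard kubectl fields (illustrative, not from the recorded run):

    # A pod with phase Pending and an empty .spec.nodeName cannot be exec'd into.
    kubectl --context multinode-737786 get pod busybox-fc5497c4f-cq58n \
      -o jsonpath='{.status.phase}{"\n"}{.spec.nodeName}{"\n"}'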
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:32:24.061487531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f033da40320ba3759bccac938ed954a52e8591012b592a9d459eac191ead142",
	            "SandboxKey": "/var/run/docker/netns/0f033da40320",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "0dc537a1f234204c25e41871b0c1dd246d8d646b8557cafc1f206a6312a58796",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p mount-start-1-736299                           | mount-start-1-736299 | jenkins | v1.33.1 | 22 May 24 18:32 UTC | 22 May 24 18:32 UTC |
	| start   | -p multinode-737786                               | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:32 UTC |                     |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=docker                                   |                      |         |         |                     |                     |
	|         | --container-runtime=docker                        |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- apply -f                   | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:36 UTC | 22 May 24 18:36 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- rollout                    | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:36 UTC |                     |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:47 UTC | 22 May 24 18:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:47 UTC | 22 May 24 18:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n --                        |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n -- nslookup               |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 -- sh                     |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.67.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786     | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n                           |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
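The last rows of the table are the PingHostFrom2Pods sequence: resolve host.minikube.internal inside a busybox pod, then ping the resolved address (192.168.67.1, the network gateway). A sketch of that check, assuming a plain kubectl on PATH where the test actually goes through out/minikube-linux-amd64 kubectl -p multinode-737786; pingHostFromPod is an illustrative name, not the test's helper:

    package sketch

    import (
    	"os/exec"
    	"strings"
    )

    // pingHostFromPod mirrors the audit rows above: take the host IP from the
    // fifth line of nslookup's output, then ping it once from inside the pod.
    func pingHostFromPod(pod string) error {
    	lookup := `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`
    	out, err := exec.Command("kubectl", "exec", pod, "--", "sh", "-c", lookup).Output()
    	if err != nil {
    		return err
    	}
    	hostIP := strings.TrimSpace(string(out)) // 192.168.67.1 in this run
    	return exec.Command("kubectl", "exec", pod, "--", "sh", "-c", "ping -c 1 "+hostIP).Run()
    }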
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:32:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:32:18.820070  160939 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:32:18.820158  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820166  160939 out.go:304] Setting ErrFile to fd 2...
	I0522 18:32:18.820169  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820356  160939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:32:18.820906  160939 out.go:298] Setting JSON to false
	I0522 18:32:18.821847  160939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4483,"bootTime":1716398256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:32:18.821903  160939 start.go:139] virtualization: kvm guest
	I0522 18:32:18.825068  160939 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:32:18.826450  160939 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:32:18.826451  160939 notify.go:220] Checking for updates...
	I0522 18:32:18.827917  160939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:32:18.829159  160939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:18.830471  160939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:32:18.832039  160939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:32:18.833509  160939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:32:18.835235  160939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:32:18.856978  160939 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:32:18.857075  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.904065  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.895172586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.904163  160939 docker.go:295] overlay module found
	I0522 18:32:18.906205  160939 out.go:177] * Using the docker driver based on user configuration
	I0522 18:32:18.907716  160939 start.go:297] selected driver: docker
	I0522 18:32:18.907745  160939 start.go:901] validating driver "docker" against <nil>
	I0522 18:32:18.907759  160939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:32:18.908486  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.953709  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.945190998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.953883  160939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 18:32:18.954091  160939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:32:18.956247  160939 out.go:177] * Using Docker driver with root privileges
	I0522 18:32:18.957858  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:18.957878  160939 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 18:32:18.957886  160939 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 18:32:18.957966  160939 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:18.959670  160939 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:32:18.961220  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:32:18.962715  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:32:18.964248  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:18.964293  160939 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:32:18.964303  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:32:18.964344  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:32:18.964398  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:32:18.964409  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:32:18.964718  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:18.964741  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json: {Name:mk43b46af9c3b0b30bdffa978db6463aacef7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:18.980726  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:32:18.980763  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:32:18.980786  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:32:18.980821  160939 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:32:18.980939  160939 start.go:364] duration metric: took 90.565µs to acquireMachinesLock for "multinode-737786"
	I0522 18:32:18.980970  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:18.981093  160939 start.go:125] createHost starting for "" (driver="docker")
	I0522 18:32:18.983462  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:32:18.983714  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:32:18.983748  160939 client.go:168] LocalClient.Create starting
	I0522 18:32:18.983834  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:32:18.983868  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983888  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.983948  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:32:18.983967  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983980  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.984396  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 18:32:18.999077  160939 cli_runner.go:211] docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 18:32:18.999133  160939 network_create.go:281] running [docker network inspect multinode-737786] to gather additional debugging logs...
	I0522 18:32:18.999152  160939 cli_runner.go:164] Run: docker network inspect multinode-737786
	W0522 18:32:19.013736  160939 cli_runner.go:211] docker network inspect multinode-737786 returned with exit code 1
	I0522 18:32:19.013763  160939 network_create.go:284] error running [docker network inspect multinode-737786]: docker network inspect multinode-737786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-737786 not found
	I0522 18:32:19.013789  160939 network_create.go:286] output of [docker network inspect multinode-737786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-737786 not found
	
	** /stderr **
	I0522 18:32:19.013898  160939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:19.029452  160939 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-638c6f0967c1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:dc:4f:16} reservation:<nil>}
	I0522 18:32:19.029912  160939 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcc438b661e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:35:35:2f} reservation:<nil>}
	I0522 18:32:19.030359  160939 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a34820}
	I0522 18:32:19.030382  160939 network_create.go:124] attempt to create docker network multinode-737786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0522 18:32:19.030423  160939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-737786 multinode-737786
	I0522 18:32:19.080955  160939 network_create.go:108] docker network multinode-737786 192.168.67.0/24 created
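The three network.go lines above walk candidate /24s until one is free: 192.168.49.0/24 and 192.168.58.0/24 are taken, so 192.168.67.0/24 is chosen and created. A sketch of that scan under two assumptions: the stride of 9 is only inferred from the 49 -> 58 -> 67 progression in this log, and taken() is an illustrative helper, not minikube's network package:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // taken reports whether any existing docker network already claims subnet.
    func taken(subnet string) bool {
    	ids, _ := exec.Command("docker", "network", "ls", "-q").Output()
    	for _, id := range strings.Fields(string(ids)) {
    		cfg, _ := exec.Command("docker", "network", "inspect", id,
    			"-f", `{{range .IPAM.Config}}{{.Subnet}}{{end}}`).Output()
    		if strings.TrimSpace(string(cfg)) == subnet {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	// 49 -> 58 -> 67 in the log suggests a stride of 9; an assumption here.
    	for octet := 49; octet < 255; octet += 9 {
    		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
    		if !taken(subnet) {
    			fmt.Println("using free private subnet", subnet)
    			return
    		}
    	}
    }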
	I0522 18:32:19.080984  160939 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-737786" container
	I0522 18:32:19.081036  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:32:19.095483  160939 cli_runner.go:164] Run: docker volume create multinode-737786 --label name.minikube.sigs.k8s.io=multinode-737786 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:32:19.111371  160939 oci.go:103] Successfully created a docker volume multinode-737786
	I0522 18:32:19.111438  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --entrypoint /usr/bin/test -v multinode-737786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:32:19.598377  160939 oci.go:107] Successfully prepared a docker volume multinode-737786
	I0522 18:32:19.598412  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:19.598430  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:32:19.598501  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:32:23.741449  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.142877958s)
	I0522 18:32:23.741484  160939 kic.go:203] duration metric: took 4.14304939s to extract preloaded images to volume ...
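The tar run above is the preload step: a throwaway container with /usr/bin/tar as its entrypoint unpacks the lz4-compressed image cache into the node's named volume, so dockerd inside the node starts with its images already seeded. A sketch of just that invocation (extractPreload is an illustrative name; the flags are copied from the logged command):

    package sketch

    import "os/exec"

    // extractPreload replays the logged step: a throwaway container running
    // /usr/bin/tar as its entrypoint unpacks the lz4 image cache into the
    // node's named volume.
    func extractPreload(volume, tarball, baseImage string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		baseImage, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
    	).Run()
    }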
	W0522 18:32:23.741633  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:32:23.741756  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:32:23.786059  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786 --name multinode-737786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786 --network multinode-737786 --ip 192.168.67.2 --volume multinode-737786:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:32:24.069142  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Running}}
	I0522 18:32:24.086344  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.103978  160939 cli_runner.go:164] Run: docker exec multinode-737786 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:32:24.141807  160939 oci.go:144] the created container "multinode-737786" has a running status.
	I0522 18:32:24.141842  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa...
	I0522 18:32:24.342469  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:32:24.342509  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:32:24.363722  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.383810  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:32:24.383841  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:32:24.455784  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.474782  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:32:24.474871  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.497547  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.497754  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.497767  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:32:24.698482  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.698509  160939 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:32:24.698565  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.715252  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.715478  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.715502  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:32:24.840636  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.840711  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.857900  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.858096  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.858117  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:32:24.967023  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
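The SSH script above keeps /etc/hosts consistent with the new hostname: rewrite an existing 127.0.1.1 line if one is present, append one otherwise. The same logic restated in Go (patchHosts is an illustrative name; the script's leading grep -xq guard is approximated by the Contains check):

    package sketch

    import "strings"

    // patchHosts restates the provisioning shell in Go: no-op if the hostname
    // is already listed, rewrite an existing 127.0.1.1 line, append otherwise.
    func patchHosts(hosts, name string) string {
    	if strings.Contains(hosts, " "+name+"\n") {
    		return hosts
    	}
    	entry := "127.0.1.1 " + name
    	lines := strings.Split(hosts, "\n")
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = entry
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + entry + "\n"
    }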
	I0522 18:32:24.967068  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:32:24.967091  160939 ubuntu.go:177] setting up certificates
	I0522 18:32:24.967102  160939 provision.go:84] configureAuth start
	I0522 18:32:24.967154  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:24.983423  160939 provision.go:143] copyHostCerts
	I0522 18:32:24.983455  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983479  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:32:24.983485  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983549  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:32:24.983615  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983633  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:32:24.983640  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983665  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:32:24.983708  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983723  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:32:24.983730  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983749  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:32:24.983796  160939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
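configureAuth above generates a server certificate whose SANs cover the loopback address, the static node IP, and the host names (san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]). A sketch of issuing such a certificate with crypto/x509, splitting SANs into IP and DNS entries; the 26280h lifetime is the CertExpiration value from the config dump, and signServerCert is an illustrative name, not minikube's provision code:

    package sketch

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"math/big"
    	"net"
    	"time"
    )

    // signServerCert issues a server certificate from a CA, splitting the SAN
    // list into IP and DNS entries the way the san=[...] list above mixes them.
    func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey,
    	pub *rsa.PublicKey, sans []string) ([]byte, error) {
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-737786"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s in the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	for _, s := range sans {
    		if ip := net.ParseIP(s); ip != nil {
    			tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
    		} else {
    			tmpl.DNSNames = append(tmpl.DNSNames, s)
    		}
    	}
    	return x509.CreateCertificate(rand.Reader, tmpl, ca, pub, caKey)
    }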
	I0522 18:32:25.113895  160939 provision.go:177] copyRemoteCerts
	I0522 18:32:25.113964  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:32:25.113999  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.130480  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:25.215072  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:32:25.215123  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:32:25.235444  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:32:25.235498  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:32:25.255313  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:32:25.255360  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:32:25.275241  160939 provision.go:87] duration metric: took 308.123688ms to configureAuth
	I0522 18:32:25.275280  160939 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:32:25.275447  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:25.275493  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.291597  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.291797  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.291813  160939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:32:25.403199  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:32:25.403222  160939 ubuntu.go:71] root file system type: overlay
	I0522 18:32:25.403368  160939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:32:25.403417  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.419508  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.419684  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.419742  160939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:32:25.540991  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:32:25.541068  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.556804  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.556997  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.557016  160939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:32:26.182116  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 18:32:25.538581939 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 18:32:26.182148  160939 machine.go:97] duration metric: took 1.707347407s to provisionDockerMachine
	I0522 18:32:26.182160  160939 client.go:171] duration metric: took 7.198404279s to LocalClient.Create
	I0522 18:32:26.182176  160939 start.go:167] duration metric: took 7.198463255s to libmachine.API.Create "multinode-737786"
	I0522 18:32:26.182182  160939 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:32:26.182195  160939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:32:26.182267  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:32:26.182301  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.198446  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.283412  160939 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:32:26.286206  160939 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:32:26.286222  160939 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:32:26.286230  160939 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:32:26.286238  160939 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:32:26.286245  160939 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:32:26.286252  160939 command_runner.go:130] > ID=ubuntu
	I0522 18:32:26.286258  160939 command_runner.go:130] > ID_LIKE=debian
	I0522 18:32:26.286280  160939 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:32:26.286291  160939 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:32:26.286302  160939 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:32:26.286317  160939 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:32:26.286328  160939 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:32:26.286376  160939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:32:26.286410  160939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:32:26.286428  160939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:32:26.286440  160939 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:32:26.286455  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:32:26.286505  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:32:26.286590  160939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:32:26.286602  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:32:26.286703  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:32:26.294122  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:26.314177  160939 start.go:296] duration metric: took 131.985031ms for postStartSetup
	I0522 18:32:26.314484  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.329734  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:26.329958  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:32:26.329996  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.344674  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.423242  160939 command_runner.go:130] > 27%!
	(MISSING)I0522 18:32:26.423479  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:32:26.427170  160939 command_runner.go:130] > 215G
	I0522 18:32:26.427358  160939 start.go:128] duration metric: took 7.446253482s to createHost
	I0522 18:32:26.427380  160939 start.go:83] releasing machines lock for "multinode-737786", held for 7.446425308s
	I0522 18:32:26.427450  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.442825  160939 ssh_runner.go:195] Run: cat /version.json
	I0522 18:32:26.442867  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.442937  160939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:32:26.443009  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.459148  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.459626  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.615027  160939 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:32:26.615123  160939 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:32:26.615168  160939 ssh_runner.go:195] Run: systemctl --version
	I0522 18:32:26.618922  160939 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:32:26.618954  160939 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:32:26.619096  160939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:32:26.622539  160939 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:32:26.622555  160939 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:32:26.622561  160939 command_runner.go:130] > Device: 37h/55d	Inode: 803930      Links: 1
	I0522 18:32:26.622567  160939 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:26.622576  160939 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622584  160939 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622592  160939 command_runner.go:130] > Change: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622604  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622753  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:32:26.643532  160939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
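	The find/sed pipeline above adds a "name" field to any loopback CNI config that lacks one and pins cniVersion to 1.0.0, since current CNI plugins reject nameless or old-versioned configs. A sketch of the patched file, assuming the minimal loopback conf the stat output above describes (exact fields in the shipped file may differ):

	    cat /etc/cni/net.d/200-loopback.conf
	    # {
	    #   "cniVersion": "1.0.0",
	    #   "name": "loopback",
	    #   "type": "loopback"
	    # }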
	I0522 18:32:26.643591  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:32:26.666889  160939 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0522 18:32:26.666926  160939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 18:32:26.666940  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.666967  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.667076  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.679769  160939 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:32:26.680589  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:32:26.688804  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:32:26.696790  160939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:32:26.696843  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:32:26.705063  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.713131  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:32:26.721185  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.729165  160939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:32:26.736590  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:32:26.744755  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:32:26.752531  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
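	Taken together, the sed edits above align /etc/containerd/config.toml with the rest of this run: the cgroupfs driver instead of the systemd one, the registry.k8s.io/pause:3.9 sandbox image, the runc v2 runtime, and /etc/cni/net.d as the CNI conf dir. A quick way to confirm the keys the seds touched (values taken from the commands above):

	    grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	    #   sandbox_image = "registry.k8s.io/pause:3.9"
	    #   SystemdCgroup = false
	    #   conf_dir = "/etc/cni/net.d"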
	I0522 18:32:26.760599  160939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:32:26.767562  160939 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:32:26.767615  160939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:32:26.774559  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:26.839033  160939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:32:26.926529  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.926582  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.926653  160939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:32:26.936733  160939 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:32:26.936821  160939 command_runner.go:130] > [Unit]
	I0522 18:32:26.936842  160939 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:32:26.936853  160939 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:32:26.936864  160939 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:32:26.936876  160939 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:32:26.936886  160939 command_runner.go:130] > Wants=network-online.target
	I0522 18:32:26.936894  160939 command_runner.go:130] > Requires=docker.socket
	I0522 18:32:26.936904  160939 command_runner.go:130] > StartLimitBurst=3
	I0522 18:32:26.936910  160939 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:32:26.936921  160939 command_runner.go:130] > [Service]
	I0522 18:32:26.936928  160939 command_runner.go:130] > Type=notify
	I0522 18:32:26.936937  160939 command_runner.go:130] > Restart=on-failure
	I0522 18:32:26.936949  160939 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:32:26.936965  160939 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:32:26.936979  160939 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:32:26.936992  160939 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:32:26.937014  160939 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:32:26.937027  160939 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:32:26.937042  160939 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:32:26.937058  160939 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:32:26.937072  160939 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:32:26.937081  160939 command_runner.go:130] > ExecStart=
	I0522 18:32:26.937105  160939 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:32:26.937116  160939 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:32:26.937132  160939 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:32:26.937143  160939 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:32:26.937151  160939 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:32:26.937158  160939 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:32:26.937167  160939 command_runner.go:130] > LimitCORE=infinity
	I0522 18:32:26.937177  160939 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:32:26.937188  160939 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:32:26.937197  160939 command_runner.go:130] > TasksMax=infinity
	I0522 18:32:26.937203  160939 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:32:26.937216  160939 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:32:26.937224  160939 command_runner.go:130] > Delegate=yes
	I0522 18:32:26.937234  160939 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:32:26.937243  160939 command_runner.go:130] > KillMode=process
	I0522 18:32:26.937253  160939 command_runner.go:130] > [Install]
	I0522 18:32:26.937263  160939 command_runner.go:130] > WantedBy=multi-user.target
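	The empty ExecStart= followed by a populated ExecStart= is the standard systemd idiom the unit's own comments describe: for list-valued directives, an empty assignment in a drop-in clears everything inherited from the base unit, so exactly one start command survives. The same pattern works for any override, for example (flags here are placeholders):

	    sudo systemctl edit docker.service
	    # [Service]
	    # ExecStart=
	    # ExecStart=/usr/bin/dockerd --some-flag
	    sudo systemctl restart docker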
	I0522 18:32:26.937834  160939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:32:26.937891  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:32:26.948358  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.963466  160939 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:32:26.963527  160939 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:32:26.966525  160939 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:32:26.966635  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:32:26.974160  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:32:26.991240  160939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:32:27.087184  160939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:32:27.183939  160939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:32:27.184074  160939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
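	The 130-byte daemon.json written here is what switches dockerd to the cgroupfs driver detected above; the rest of minikube's template is logging and storage housekeeping. A sketch of the likely content (the exec-opts line is the one this step is about; the other keys are illustrative assumptions):

	    cat /etc/docker/daemon.json
	    # {
	    #   "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    #   "log-driver": "json-file",
	    #   "log-opts": { "max-size": "100m" },
	    #   "storage-driver": "overlay2"
	    # }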
	I0522 18:32:27.199707  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.274364  160939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:32:27.497339  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:32:27.508050  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.517912  160939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:32:27.594604  160939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:32:27.603789  160939 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0522 18:32:27.670370  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.738915  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:32:27.750303  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.759297  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.830818  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:32:27.886665  160939 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:32:27.886752  160939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:32:27.890680  160939 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:32:27.890703  160939 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:32:27.890711  160939 command_runner.go:130] > Device: 40h/64d	Inode: 258         Links: 1
	I0522 18:32:27.890720  160939 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:32:27.890729  160939 command_runner.go:130] > Access: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890736  160939 command_runner.go:130] > Modify: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890744  160939 command_runner.go:130] > Change: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890751  160939 command_runner.go:130] >  Birth: -
	I0522 18:32:27.890789  160939 start.go:562] Will wait 60s for crictl version
	I0522 18:32:27.890843  160939 ssh_runner.go:195] Run: which crictl
	I0522 18:32:27.893791  160939 command_runner.go:130] > /usr/bin/crictl
	I0522 18:32:27.893846  160939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:32:27.922140  160939 command_runner.go:130] > Version:  0.1.0
	I0522 18:32:27.922160  160939 command_runner.go:130] > RuntimeName:  docker
	I0522 18:32:27.922164  160939 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:32:27.922170  160939 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:32:27.924081  160939 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:32:27.924147  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.943721  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.943794  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.963666  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.967758  160939 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:32:27.967841  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:27.982248  160939 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:32:27.985502  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
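	The bash one-liner above is a replace-in-place pattern for files where sed -i would swap the inode out from under a bind mount (as /etc/hosts is inside the container): strip any old entry, append the fresh one to a temp file, then cp, not mv, the temp file back so the mount stays intact. The same shape pins any single host entry; NAME and ADDR below are placeholders:

	    NAME=host.minikube.internal; ADDR=192.168.67.1
	    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$ADDR" "$NAME"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$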
	I0522 18:32:27.994876  160939 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:32:27.994996  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:27.995038  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.010537  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.010570  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.010579  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.010586  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.010591  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.010596  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.010603  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.010611  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.011521  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.011540  160939 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:32:28.011593  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.027292  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.027322  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.027331  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.027336  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.027341  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.027345  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.027350  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.027355  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.028262  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.028281  160939 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:32:28.028301  160939 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:32:28.028415  160939 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:32:28.028462  160939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:32:28.069428  160939 command_runner.go:130] > cgroupfs
	I0522 18:32:28.070479  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:28.070498  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:28.070517  160939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
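	Once the cluster is up, the pod CIDR chosen here is recorded on each node object, which is how kindnet later learns its per-node ranges. A quick check, assuming the kubeconfig and context this run writes:

	    kubectl --context multinode-737786 get nodes \
	      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.podCIDR}{"\n"}{end}'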
	I0522 18:32:28.070539  160939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:32:28.070668  160939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
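	The rendered kubeadm config above is shipped to the node a few lines below as /var/tmp/minikube/kubeadm.yaml.new. When hand-editing a config like this, kubeadm can vet it without touching the host: a dry run parses the file, runs it through defaulting and validation, and prints the manifests it would have written:

	    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run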
	I0522 18:32:28.070717  160939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:32:28.078629  160939 command_runner.go:130] > kubeadm
	I0522 18:32:28.078645  160939 command_runner.go:130] > kubectl
	I0522 18:32:28.078649  160939 command_runner.go:130] > kubelet
	I0522 18:32:28.078672  160939 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:32:28.078732  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:32:28.086243  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:32:28.101448  160939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:32:28.116571  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0522 18:32:28.131251  160939 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:32:28.134083  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:28.142915  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:28.220165  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:28.231892  160939 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:32:28.231919  160939 certs.go:194] generating shared ca certs ...
	I0522 18:32:28.231939  160939 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.232062  160939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:32:28.232110  160939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:32:28.232120  160939 certs.go:256] generating profile certs ...
	I0522 18:32:28.232166  160939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:32:28.232179  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt with IP's: []
	I0522 18:32:28.429639  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt ...
	I0522 18:32:28.429667  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt: {Name:mkf8a2953d60a961d7574d013acfe3a49fa0bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429820  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key ...
	I0522 18:32:28.429830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key: {Name:mk8a5d9e68b7e6e877768e7a2b460a40a5615658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429900  160939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:32:28.429915  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0522 18:32:28.507177  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 ...
	I0522 18:32:28.507207  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43: {Name:mk09ce970fc623afc85e3fab7e404680e391a586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507367  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 ...
	I0522 18:32:28.507382  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43: {Name:mkb137dcb8e57c549f50c85273becdd727997895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507489  160939 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	I0522 18:32:28.507557  160939 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key
	I0522 18:32:28.507612  160939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:32:28.507627  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt with IP's: []
	I0522 18:32:28.617440  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt ...
	I0522 18:32:28.617473  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt: {Name:mk54959ff23e2bad94a115faba59db15d7610b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617661  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key ...
	I0522 18:32:28.617679  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key: {Name:mkd647f7d425cda8f2c79b7f52b5e4d12a0c0d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617777  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:32:28.617797  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:32:28.617808  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:32:28.617823  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:32:28.617836  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:32:28.617848  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:32:28.617860  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:32:28.617873  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
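	crypto.go generated the apiserver certificate above with four IP SANs (the in-cluster service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1, and the node IP 192.168.67.2). After the scp calls below place it under /var/lib/minikube/certs, the SANs can be confirmed with openssl:

	    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	      | grep -A1 'Subject Alternative Name'
	    # expect the DNS SANs plus IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.67.2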
	I0522 18:32:28.617924  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:32:28.617957  160939 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:32:28.617967  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:32:28.617990  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:32:28.618019  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:32:28.618040  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:32:28.618075  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:28.618102  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.618116  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.618128  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.618629  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:32:28.639518  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:32:28.659910  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:32:28.679937  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:32:28.699821  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:32:28.719536  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:32:28.739636  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:32:28.759509  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:32:28.779547  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:32:28.799365  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:32:28.819247  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:32:28.839396  160939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:32:28.854046  160939 ssh_runner.go:195] Run: openssl version
	I0522 18:32:28.858540  160939 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:32:28.858690  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:32:28.866551  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869507  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869532  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869569  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.875214  160939 command_runner.go:130] > b5213941
	I0522 18:32:28.875413  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:32:28.883074  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:32:28.890531  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893535  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893557  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893596  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.899083  160939 command_runner.go:130] > 51391683
	I0522 18:32:28.899310  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:32:28.906972  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:32:28.914876  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917837  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917865  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917909  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.923606  160939 command_runner.go:130] > 3ec20f2e
	I0522 18:32:28.923823  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
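	The openssl x509 -hash calls above compute each certificate's subject-name hash, and the ln -fs commands create the <hash>.0 links that OpenSSL's hashed-directory lookup expects (the layout c_rehash would otherwise generate); the .0 suffix distinguishes multiple certs sharing a hash. Verifying one by hand:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	    ls -l /etc/ssl/certs/b5213941.0                                           # links to minikubeCA.pem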
	I0522 18:32:28.931516  160939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:32:28.934218  160939 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934259  160939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934296  160939 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:28.934404  160939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:32:28.950504  160939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:32:28.958332  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0522 18:32:28.958356  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0522 18:32:28.958365  160939 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0522 18:32:28.958430  160939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 18:32:28.966017  160939 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 18:32:28.966056  160939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 18:32:28.973169  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0522 18:32:28.973191  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0522 18:32:28.973203  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0522 18:32:28.973217  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973245  160939 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973254  160939 kubeadm.go:156] found existing configuration files:
	
	I0522 18:32:28.973282  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 18:32:28.979661  160939 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980332  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980367  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 18:32:28.987227  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 18:32:28.994428  160939 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994468  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994505  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 18:32:29.001374  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.008562  160939 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008604  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008648  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.015901  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 18:32:29.023088  160939 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023130  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023170  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 18:32:29.030242  160939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
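	kubeadm.go:213 above is why the --ignore-preflight-errors list is so long: under the docker driver the "node" is itself a container, so host-level checks such as SystemVerification, Swap, Mem and NumCPU would fail spuriously. The preflight phase can also be exercised on its own when tuning such a list:

	    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=Swap,SystemVerification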
	I0522 18:32:29.069760  160939 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069799  160939 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069836  160939 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 18:32:29.069844  160939 command_runner.go:130] > [preflight] Running pre-flight checks
	I0522 18:32:29.113834  160939 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113865  160939 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113960  160939 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.113987  160939 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.114021  160939 kubeadm.go:309] OS: Linux
	I0522 18:32:29.114029  160939 command_runner.go:130] > OS: Linux
	I0522 18:32:29.114085  160939 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 18:32:29.114092  160939 command_runner.go:130] > CGROUPS_CPU: enabled
	I0522 18:32:29.114134  160939 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114140  160939 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114177  160939 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 18:32:29.114183  160939 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0522 18:32:29.114230  160939 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 18:32:29.114237  160939 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0522 18:32:29.114278  160939 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 18:32:29.114285  160939 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0522 18:32:29.114324  160939 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 18:32:29.114331  160939 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0522 18:32:29.114373  160939 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 18:32:29.114379  160939 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0522 18:32:29.114421  160939 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114428  160939 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114464  160939 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 18:32:29.114483  160939 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0522 18:32:29.173446  160939 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173485  160939 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173623  160939 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173639  160939 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173777  160939 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.173789  160939 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.376675  160939 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379640  160939 out.go:204]   - Generating certificates and keys ...
	I0522 18:32:29.376743  160939 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379742  160939 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0522 18:32:29.379760  160939 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 18:32:29.379853  160939 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.379864  160939 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.571675  160939 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.571705  160939 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.667370  160939 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.667408  160939 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.730638  160939 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:29.730650  160939 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:30.114166  160939 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.114190  160939 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.185007  160939 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185032  160939 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185157  160939 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.185169  160939 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376151  160939 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376188  160939 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376347  160939 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376364  160939 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.621621  160939 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.621651  160939 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.882886  160939 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.882922  160939 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.976851  160939 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 18:32:30.976877  160939 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0522 18:32:30.976927  160939 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:30.976932  160939 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:31.205083  160939 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.205126  160939 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.287749  160939 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.287812  160939 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.548360  160939 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.548390  160939 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.793952  160939 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.793983  160939 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.889475  160939 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.889508  160939 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.890099  160939 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.890122  160939 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.892764  160939 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895234  160939 out.go:204]   - Booting up control plane ...
	I0522 18:32:31.892832  160939 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895375  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895388  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895507  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895522  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895605  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.895619  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.903936  160939 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.903958  160939 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.904721  160939 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904737  160939 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904800  160939 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 18:32:31.904815  160939 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0522 18:32:31.989235  160939 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989268  160939 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989364  160939 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:31.989377  160939 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:32.490313  160939 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490352  160939 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490462  160939 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:32.490478  160939 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:36.991403  160939 kubeadm.go:309] [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:36.991445  160939 command_runner.go:130] > [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:37.002153  160939 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.002184  160939 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.012503  160939 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.012532  160939 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.028436  160939 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028465  160939 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028707  160939 kubeadm.go:309] [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.028725  160939 command_runner.go:130] > [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.035001  160939 kubeadm.go:309] [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.035012  160939 command_runner.go:130] > [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.036324  160939 out.go:204]   - Configuring RBAC rules ...
	I0522 18:32:37.036438  160939 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.036450  160939 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.039237  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.039252  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.044789  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.044808  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.047056  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.047074  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.049159  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.049174  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.051503  160939 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.051520  160939 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.397004  160939 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.397044  160939 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.813980  160939 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 18:32:37.814007  160939 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0522 18:32:38.397032  160939 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.397056  160939 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.398018  160939 kubeadm.go:309] 
	I0522 18:32:38.398101  160939 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398119  160939 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398137  160939 kubeadm.go:309] 
	I0522 18:32:38.398211  160939 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398218  160939 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398222  160939 kubeadm.go:309] 
	I0522 18:32:38.398246  160939 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 18:32:38.398255  160939 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0522 18:32:38.398337  160939 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398355  160939 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398434  160939 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398443  160939 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398453  160939 kubeadm.go:309] 
	I0522 18:32:38.398515  160939 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398522  160939 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398529  160939 kubeadm.go:309] 
	I0522 18:32:38.398609  160939 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398618  160939 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398622  160939 kubeadm.go:309] 
	I0522 18:32:38.398664  160939 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 18:32:38.398677  160939 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0522 18:32:38.398789  160939 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398800  160939 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398863  160939 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398869  160939 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398873  160939 kubeadm.go:309] 
	I0522 18:32:38.398944  160939 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.398950  160939 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.399022  160939 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 18:32:38.399032  160939 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0522 18:32:38.399037  160939 kubeadm.go:309] 
	I0522 18:32:38.399123  160939 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399130  160939 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399216  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399222  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399239  160939 kubeadm.go:309] 	--control-plane 
	I0522 18:32:38.399245  160939 command_runner.go:130] > 	--control-plane 
	I0522 18:32:38.399248  160939 kubeadm.go:309] 
	I0522 18:32:38.399370  160939 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399378  160939 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399382  160939 kubeadm.go:309] 
	I0522 18:32:38.399476  160939 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399489  160939 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399636  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.399649  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.401263  160939 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401277  160939 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401363  160939 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401380  160939 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
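	(The block above is kubeadm's standard post-init banner. As a rough illustration of the three kubeconfig commands it prints (mkdir -p $HOME/.kube; cp admin.conf; chown), here is a minimal Go sketch; this is not minikube's actual code, and installKubeconfig is a hypothetical helper using only the paths shown in the log.)

-- illustrative Go sketch (not from this run) --
package main

import (
	"io"
	"os"
	"path/filepath"
)

// installKubeconfig mirrors the commands kubeadm prints above:
// mkdir -p $HOME/.kube; cp /etc/kubernetes/admin.conf $HOME/.kube/config;
// chown $(id -u):$(id -g) $HOME/.kube/config.
func installKubeconfig() error {
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	dstPath := filepath.Join(home, ".kube", "config")
	if err := os.MkdirAll(filepath.Dir(dstPath), 0o755); err != nil {
		return err
	}
	src, err := os.Open("/etc/kubernetes/admin.conf")
	if err != nil {
		return err
	}
	defer src.Close()
	dst, err := os.OpenFile(dstPath, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o600)
	if err != nil {
		return err
	}
	defer dst.Close()
	if _, err := io.Copy(dst, src); err != nil {
		return err
	}
	// chown to the invoking (non-root) user, as the banner instructs.
	return os.Chown(dstPath, os.Getuid(), os.Getgid())
}

func main() {
	if err := installKubeconfig(); err != nil {
		panic(err)
	}
}
-- end sketch --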
	I0522 18:32:38.401398  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:38.401406  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:38.403405  160939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 18:32:38.404599  160939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 18:32:38.408100  160939 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0522 18:32:38.408121  160939 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0522 18:32:38.408128  160939 command_runner.go:130] > Device: 37h/55d	Inode: 808770      Links: 1
	I0522 18:32:38.408133  160939 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:38.408141  160939 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408145  160939 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408150  160939 command_runner.go:130] > Change: 2024-05-22 17:45:13.285811920 +0000
	I0522 18:32:38.408155  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:13.257809894 +0000
	I0522 18:32:38.408204  160939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 18:32:38.408217  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 18:32:38.424237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 18:32:38.586825  160939 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.590952  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.596051  160939 command_runner.go:130] > serviceaccount/kindnet created
	I0522 18:32:38.602929  160939 command_runner.go:130] > daemonset.apps/kindnet created
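	(The four "created" lines above are the output of the kubectl apply issued by the ssh_runner at 18:32:38.424237. A self-contained Go sketch of that one invocation, reusing the binary path, kubeconfig, and manifest path exactly as they appear in the log, and meant to run on the node itself:)

-- illustrative Go sketch (not from this run) --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same command line as the ssh_runner.go:195 entry above.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // expect: clusterrole/.../kindnet created, daemonset.apps/kindnet created, ...
	if err != nil {
		panic(err)
	}
}
-- end sketch --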
	I0522 18:32:38.606148  160939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 18:32:38.606224  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.606247  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-737786 minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=multinode-737786 minikube.k8s.io/primary=true
	I0522 18:32:38.613527  160939 command_runner.go:130] > -16
	I0522 18:32:38.613563  160939 ops.go:34] apiserver oom_adj: -16
	I0522 18:32:38.671101  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0522 18:32:38.671199  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.679745  160939 command_runner.go:130] > node/multinode-737786 labeled
	I0522 18:32:38.773177  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.171792  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.232239  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.671894  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.732898  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.171368  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.228640  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.671860  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.732183  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.171401  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.231451  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.672085  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.732558  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.172181  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.230594  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.672237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.733746  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.171306  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.233896  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.671416  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.730755  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.171408  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.231441  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.672067  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.729906  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.171343  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.231696  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.671243  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.732606  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.172238  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.229695  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.671885  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.731711  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.171960  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.228503  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.671939  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.733171  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.171805  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.230525  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.672280  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.731666  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.171973  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.230294  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.671915  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.733184  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.171393  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.230515  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.672155  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.732157  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.171406  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.266742  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.671250  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.747943  160939 command_runner.go:130] > NAME      SECRETS   AGE
	I0522 18:32:51.747967  160939 command_runner.go:130] > default   0         0s
	I0522 18:32:51.747991  160939 kubeadm.go:1107] duration metric: took 13.141832952s to wait for elevateKubeSystemPrivileges
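	(The alternating "get sa default" / "NotFound" lines above are a poll loop: minikube re-runs the same kubectl command roughly every 500ms until the "default" ServiceAccount exists, 13.14s in this run. A simplified Go sketch of that loop; the 2-minute timeout is an assumption, not taken from the log:)

-- illustrative Go sketch (not from this run) --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		// Same command as the repeated ssh_runner.go:195 entries above.
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.30.1/kubectl",
			"get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Print(string(out)) // NAME SECRETS AGE / default 0 0s
			return
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}
	panic("timed out waiting for the default service account")
}
-- end sketch --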
	W0522 18:32:51.748021  160939 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 18:32:51.748034  160939 kubeadm.go:393] duration metric: took 22.813740637s to StartCluster
	I0522 18:32:51.748054  160939 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.748131  160939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.748830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.749052  160939 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:51.750591  160939 out.go:177] * Verifying Kubernetes components...
	I0522 18:32:51.749093  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 18:32:51.749107  160939 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:32:51.749382  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:51.752222  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:51.752296  160939 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:32:51.752312  160939 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:32:51.752326  160939 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	I0522 18:32:51.752339  160939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:32:51.752357  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.752681  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.752857  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.774832  160939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:51.775039  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.776160  160939 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:51.776175  160939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:32:51.776227  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.776423  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.776863  160939 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:32:51.776981  160939 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	I0522 18:32:51.777016  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.777336  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.795509  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.796953  160939 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:51.796975  160939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:32:51.797025  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.814477  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.870824  160939 command_runner.go:130] > apiVersion: v1
	I0522 18:32:51.870847  160939 command_runner.go:130] > data:
	I0522 18:32:51.870853  160939 command_runner.go:130] >   Corefile: |
	I0522 18:32:51.870859  160939 command_runner.go:130] >     .:53 {
	I0522 18:32:51.870863  160939 command_runner.go:130] >         errors
	I0522 18:32:51.870869  160939 command_runner.go:130] >         health {
	I0522 18:32:51.870875  160939 command_runner.go:130] >            lameduck 5s
	I0522 18:32:51.870881  160939 command_runner.go:130] >         }
	I0522 18:32:51.870894  160939 command_runner.go:130] >         ready
	I0522 18:32:51.870908  160939 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0522 18:32:51.870919  160939 command_runner.go:130] >            pods insecure
	I0522 18:32:51.870929  160939 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0522 18:32:51.870939  160939 command_runner.go:130] >            ttl 30
	I0522 18:32:51.870946  160939 command_runner.go:130] >         }
	I0522 18:32:51.870957  160939 command_runner.go:130] >         prometheus :9153
	I0522 18:32:51.870967  160939 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0522 18:32:51.870977  160939 command_runner.go:130] >            max_concurrent 1000
	I0522 18:32:51.870983  160939 command_runner.go:130] >         }
	I0522 18:32:51.870993  160939 command_runner.go:130] >         cache 30
	I0522 18:32:51.871002  160939 command_runner.go:130] >         loop
	I0522 18:32:51.871009  160939 command_runner.go:130] >         reload
	I0522 18:32:51.871022  160939 command_runner.go:130] >         loadbalance
	I0522 18:32:51.871031  160939 command_runner.go:130] >     }
	I0522 18:32:51.871038  160939 command_runner.go:130] > kind: ConfigMap
	I0522 18:32:51.871047  160939 command_runner.go:130] > metadata:
	I0522 18:32:51.871058  160939 command_runner.go:130] >   creationTimestamp: "2024-05-22T18:32:37Z"
	I0522 18:32:51.871067  160939 command_runner.go:130] >   name: coredns
	I0522 18:32:51.871075  160939 command_runner.go:130] >   namespace: kube-system
	I0522 18:32:51.871086  160939 command_runner.go:130] >   resourceVersion: "229"
	I0522 18:32:51.871097  160939 command_runner.go:130] >   uid: d6517ddd-1175-4a40-a10d-60d1d382d7ae
	I0522 18:32:51.892382  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:51.892495  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
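	(The bash pipeline above rewrites the Corefile fetched earlier: sed injects a hosts{} block mapping 192.168.67.1 to host.minikube.internal ahead of the forward stanza, adds a log directive ahead of errors, and kubectl replace pushes the result back. A rough Go equivalent of just the hosts injection, operating on plain strings; injectHostRecord is a hypothetical helper, not minikube code:)

-- illustrative Go sketch (not from this run) --
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a hosts{} block immediately before the
// "forward . /etc/resolv.conf" stanza, as the sed expression above does.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf(
		"        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n",
		hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // inject just above the forward block
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	in := ".:53 {\n        forward . /etc/resolv.conf {\n        }\n}\n"
	fmt.Print(injectHostRecord(in, "192.168.67.1"))
}
-- end sketch --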
	I0522 18:32:51.950050  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.950378  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.950733  160939 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.950852  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:51.950863  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.950877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.950889  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.959546  160939 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0522 18:32:51.959576  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.959584  160939 round_trippers.go:580]     Audit-Id: 5ddc21bd-b1b2-4ea2-81cf-c014c9a04f15
	I0522 18:32:51.959590  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.959595  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.959598  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.959602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.959606  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.959736  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:51.960668  160939 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:32:51.960761  160939 node_ready.go:38] duration metric: took 9.99326ms for node "multinode-737786" to be "Ready" ...
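	(Each round_trippers block in this log is a plain HTTPS request against the API server, authenticated with the client cert/key and CA paths from the rest.Config dump above. A self-contained Go sketch of the same readiness probe, using only the host, file paths, and Accept header shown in this log:)

-- illustrative Go sketch (not from this run) --
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Client cert/key and CA exactly as listed in the kapi.go:59 config dump.
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt",
		"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
	}}
	req, _ := http.NewRequest("GET",
		"https://192.168.67.2:8443/api/v1/nodes/multinode-737786", nil)
	req.Header.Set("Accept", "application/json") // same header the round-tripper logs
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status) // 200 OK, with the Node JSON in body
	fmt.Println(string(body))
}
-- end sketch --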
	I0522 18:32:51.960805  160939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:32:51.960931  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:32:51.960963  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.960982  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.960996  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.964902  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:51.964929  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.964939  160939 round_trippers.go:580]     Audit-Id: 8b3d34ee-cdb3-49cd-991b-94f61024f9e2
	I0522 18:32:51.964945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.964952  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.964972  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.964977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.964987  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.965722  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"354"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59005 chars]
	I0522 18:32:51.970917  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	I0522 18:32:51.971068  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:51.971109  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.971130  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.971146  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.043914  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:52.045304  160939 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0522 18:32:52.045329  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.045339  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.045343  160939 round_trippers.go:580]     Audit-Id: bed69948-0150-43f6-8c9c-dfd39f8a81e4
	I0522 18:32:52.045349  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.045354  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.045361  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.045365  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.046685  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.047307  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.047329  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.047339  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.047344  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.049383  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:52.051476  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.051500  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.051510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.051516  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.051520  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.051524  160939 round_trippers.go:580]     Audit-Id: 2d50dfec-8764-4cd8-92b8-99f40ba4532d
	I0522 18:32:52.051530  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.051543  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.051659  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.471981  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.472002  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.472013  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.472019  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.547388  160939 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0522 18:32:52.547416  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.547425  160939 round_trippers.go:580]     Audit-Id: 3eb91eea-1138-4663-bd0b-d4f080c3a1ee
	I0522 18:32:52.547430  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.547435  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.547439  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.547457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.547463  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.547916  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.548699  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.548751  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.548782  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.548796  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.554135  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.554200  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.554224  160939 round_trippers.go:580]     Audit-Id: c62627b8-a513-4303-8697-a7fe1f12763e
	I0522 18:32:52.554239  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.554272  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.554291  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.554304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.554318  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.554527  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.556697  160939 command_runner.go:130] > configmap/coredns replaced
	I0522 18:32:52.556753  160939 start.go:946] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0522 18:32:52.557175  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:52.557491  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:52.557873  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.557907  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.557920  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.557932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558046  160939 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0522 18:32:52.558165  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:32:52.558237  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.558260  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558272  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.560256  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:52.560319  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.560338  160939 round_trippers.go:580]     Audit-Id: 12b0e11e-6a44-4304-a157-2b7055e2205e
	I0522 18:32:52.560351  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.560363  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.560396  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.560416  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.560431  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.560444  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.560488  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561030  160939 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561137  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.561162  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.561192  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.561209  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.561222  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.561529  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:52.561547  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.561556  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.561562  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.561567  160939 round_trippers.go:580]     Content-Length: 1273
	I0522 18:32:52.561573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.561577  160939 round_trippers.go:580]     Audit-Id: e2fb2ed9-f480-430a-b9b8-1cb5e5498c36
	I0522 18:32:52.561587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.561592  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.561795  160939 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0522 18:32:52.562115  160939 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.562161  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:32:52.562173  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.562180  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.562188  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.562193  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.566308  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.566355  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.566400  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566361  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566429  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566439  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566449  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566463  160939 round_trippers.go:580]     Content-Length: 1220
	I0522 18:32:52.566468  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566473  160939 round_trippers.go:580]     Audit-Id: 6b60d46d-17ef-45bb-880c-06c439fe9bab
	I0522 18:32:52.566411  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566491  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566498  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566501  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.566505  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566505  160939 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.566509  160939 round_trippers.go:580]     Audit-Id: 2b01bd0d-fb2f-4a1e-8831-7dc2e68860f5
	I0522 18:32:52.566521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566538  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"360","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
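	(The two PUTs above ran concurrently, which is why their response headers interleave in the log: one lowers the coredns deployment's scale, the other stamps the "standard" StorageClass as cluster default via the storageclass.kubernetes.io/is-default-class annotation visible in the request body. Below is a minimal client-go sketch of the StorageClass half — an illustration only, not minikube's actual code; the kubeconfig path is a placeholder.)

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Placeholder kubeconfig path -- substitute your own.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	// Fetch the class, add the default-class annotation seen in the log,
    	// and PUT the modified object back (the same request shape as above).
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
    		panic(err)
    	}
    	fmt.Println("storageclass/standard marked as default")
    }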
	I0522 18:32:52.972030  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.972055  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.972069  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.972073  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.973864  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.973887  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.973900  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.973905  160939 round_trippers.go:580]     Audit-Id: 487db757-1a6c-442b-b5d4-799652d478f6
	I0522 18:32:52.973912  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.973918  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.973922  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.973927  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.974296  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:52.974890  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.974910  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.974922  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.974927  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.976545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.976564  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.976574  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.976579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.976584  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.976589  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.976594  160939 round_trippers.go:580]     Audit-Id: 785dc732-84fe-4320-964c-c2a36a76c8f6
	I0522 18:32:52.976600  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.976934  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.058578  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:53.058609  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.058620  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.058627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.061245  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.061289  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.061299  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:53.061340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.061372  160939 round_trippers.go:580]     Audit-Id: 77d818dd-5f3a-495e-b1ef-ad1a288275fa
	I0522 18:32:53.061388  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.061402  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.061415  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.061432  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.061472  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"370","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:53.061571  160939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-737786" context rescaled to 1 replicas
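	(The rescale summarized by kapi.go:248 is the scale-subresource PUT logged at 18:32:52.561030: spec.replicas on the coredns deployment drops from 2 to 1, and the follow-up GET above confirms status.replicas has converged to 1. A minimal sketch using client-go's typed scale client — imports as in the previous sketch; this illustrates the API call, it is not the kapi.go implementation.)

    // rescaleCoreDNS sets the coredns deployment's replica count through the
    // autoscaling/v1 Scale subresource, mirroring the PUT in the log above.
    func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
    	deployments := cs.AppsV1().Deployments("kube-system")
    	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	scale.Spec.Replicas = replicas // e.g. 2 -> 1, as in the log
    	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
    	return err
    }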
	I0522 18:32:53.076516  160939 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0522 18:32:53.076577  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0522 18:32:53.076599  160939 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076613  160939 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076633  160939 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0522 18:32:53.076657  160939 command_runner.go:130] > pod/storage-provisioner created
	I0522 18:32:53.076679  160939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02727208s)
	I0522 18:32:53.079116  160939 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:32:53.080504  160939 addons.go:505] duration metric: took 1.3313922s for enable addons: enabled=[default-storageclass storage-provisioner]
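	(The "created" lines above come from applying the storage-provisioner addon manifest with kubectl inside the node. For illustration, here are the first two of those objects re-created directly with client-go; note the ClusterRole referenced in the binding is an assumption — the log names only the binding itself.)

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	rbacv1 "k8s.io/api/rbac/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // createStorageProvisionerRBAC re-creates the ServiceAccount and
    // ClusterRoleBinding that the addon manifest applies.
    func createStorageProvisionerRBAC(ctx context.Context, cs kubernetes.Interface) error {
    	sa := &corev1.ServiceAccount{ObjectMeta: metav1.ObjectMeta{
    		Name: "storage-provisioner", Namespace: "kube-system",
    	}}
    	if _, err := cs.CoreV1().ServiceAccounts("kube-system").Create(ctx, sa, metav1.CreateOptions{}); err != nil {
    		return err
    	}
    	crb := &rbacv1.ClusterRoleBinding{
    		ObjectMeta: metav1.ObjectMeta{Name: "storage-provisioner"},
    		// Assumption: the binding targets this ClusterRole; the log does not say.
    		RoleRef: rbacv1.RoleRef{APIGroup: "rbac.authorization.k8s.io",
    			Kind: "ClusterRole", Name: "system:persistent-volume-provisioner"},
    		Subjects: []rbacv1.Subject{{Kind: "ServiceAccount",
    			Name: "storage-provisioner", Namespace: "kube-system"}},
    	}
    	_, err := cs.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
    	return err
    }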
	I0522 18:32:53.471419  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.471453  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.471462  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.471488  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.473769  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.473791  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.473800  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.473806  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.473811  160939 round_trippers.go:580]     Audit-Id: 19f0699f-65e4-4321-a5c4-f6dcf712595d
	I0522 18:32:53.473821  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.473827  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.473830  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.474009  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.474506  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.474523  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.474532  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.474538  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.476545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.476568  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.476579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.476584  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.476591  160939 round_trippers.go:580]     Audit-Id: 723b363a-893a-4a61-92a4-6c8128f0cdae
	I0522 18:32:53.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.476602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.476735  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.971555  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.971574  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.971587  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.971591  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.973627  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.973649  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.973659  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.973664  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.973670  160939 round_trippers.go:580]     Audit-Id: e1a5610a-326e-418b-be80-a1b218bad573
	I0522 18:32:53.973679  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.973686  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.973691  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.973900  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.974364  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.974377  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.974386  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.974395  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.976104  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.976125  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.976134  160939 round_trippers.go:580]     Audit-Id: 1d117d40-7bef-4873-8469-b7cbb9e6e3e0
	I0522 18:32:53.976139  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.976143  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.976148  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.976158  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.976278  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.976641  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:54.471526  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.471550  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.471561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.471566  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.473892  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.473909  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.473916  160939 round_trippers.go:580]     Audit-Id: 38fa8439-426c-4d8e-8939-768fdd726b5d
	I0522 18:32:54.473920  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.473923  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.473929  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.473935  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.473939  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.474175  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.474657  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.474672  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.474679  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.474682  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.476422  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.476440  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.476449  160939 round_trippers.go:580]     Audit-Id: a464492a-887c-4ec3-9a36-841c6416e733
	I0522 18:32:54.476454  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.476458  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.476461  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.476465  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.476470  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.476646  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:54.971300  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.971328  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.971338  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.971345  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.973536  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.973554  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.973560  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.973564  160939 round_trippers.go:580]     Audit-Id: 233e0e2b-7f8e-4aa8-8c2e-b30dfaf9e4ee
	I0522 18:32:54.973569  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.973575  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.973580  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.973588  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.973824  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.974258  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.974270  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.974277  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.974281  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.976126  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.976141  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.976157  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.976161  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.976166  160939 round_trippers.go:580]     Audit-Id: 72f4a310-bf67-444b-9e24-1577b45c6c56
	I0522 18:32:54.976171  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.976176  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.976347  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.471862  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.471892  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.471903  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.471908  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.474083  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:55.474099  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.474105  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.474108  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.474111  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.474114  160939 round_trippers.go:580]     Audit-Id: 8719e64b-1bf6-4245-a412-eed38a58d1ce
	I0522 18:32:55.474117  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.474121  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.474290  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.474797  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.474823  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.474832  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.474840  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.476324  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.476342  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.476349  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.476355  160939 round_trippers.go:580]     Audit-Id: db213f13-4ec8-4ca3-8987-3f1626a1ad2d
	I0522 18:32:55.476361  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.476365  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.476368  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.476372  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.476512  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.972155  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.972178  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.972186  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.972189  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.973945  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.973967  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.973975  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.973981  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.973987  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.973990  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.973994  160939 round_trippers.go:580]     Audit-Id: a2f51de9-bbaf-49c3-b52e-cd37fc92f529
	I0522 18:32:55.973999  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.974153  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.974595  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.974611  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.974621  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.974627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.976270  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.976293  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.976301  160939 round_trippers.go:580]     Audit-Id: 93227216-8ffe-41b3-8a0d-0b4e86a54912
	I0522 18:32:55.976306  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.976310  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.976315  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.976319  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.976325  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.976427  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.976688  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:56.472139  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.472158  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.472167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.472170  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.474238  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.474260  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.474268  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.474274  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.474279  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.474283  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.474287  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.474292  160939 round_trippers.go:580]     Audit-Id: f67f7ae7-b10d-49f2-94a9-005c4a460c94
	I0522 18:32:56.474484  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.474925  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.474940  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.474946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.474951  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.476537  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.476552  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.476558  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.476563  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.476567  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.476570  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.476573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.476576  160939 round_trippers.go:580]     Audit-Id: 518e1062-0e5b-47ad-b60f-0ff66e25a622
	I0522 18:32:56.476712  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:56.971350  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.971373  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.971381  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.971384  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.973476  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.973497  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.973506  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.973511  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.973517  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.973523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.973527  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.973531  160939 round_trippers.go:580]     Audit-Id: eedbefe3-18e8-407d-9ede-0033266cdf11
	I0522 18:32:56.973633  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.974094  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.974111  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.974118  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.974123  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.975718  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.975738  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.975747  160939 round_trippers.go:580]     Audit-Id: 74afa443-a147-43c7-8759-9886afead09a
	I0522 18:32:56.975753  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.975758  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.975764  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.975768  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.975771  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.975928  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.471499  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.471522  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.471528  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.471532  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.473644  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.473662  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.473668  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.473671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.473674  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.473677  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.473680  160939 round_trippers.go:580]     Audit-Id: 2eec1341-a4a0-4edc-9eab-dd0cee12d4eb
	I0522 18:32:57.473682  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.473870  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.474329  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.474343  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.474350  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.474353  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.475871  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.475886  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.475896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.475901  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.475906  160939 round_trippers.go:580]     Audit-Id: 7e8e4b95-aa91-463a-8f1e-a7944e5daa49
	I0522 18:32:57.475911  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.475916  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.475920  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.476058  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.971752  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.971774  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.971786  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.971790  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.974020  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.974037  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.974043  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.974047  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.974051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.974054  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.974057  160939 round_trippers.go:580]     Audit-Id: 9042de65-ddca-4653-8deb-6e07b20ad9d2
	I0522 18:32:57.974061  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.974263  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.974686  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.974698  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.974705  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.974709  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.976426  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.976445  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.976453  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.976459  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.976464  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.976467  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.976472  160939 round_trippers.go:580]     Audit-Id: 9526988d-2210-4a9c-a210-f69ada2f111e
	I0522 18:32:57.976478  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.976615  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.976919  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
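	[Editor's note: the lines above are one complete iteration of the readiness wait: a GET for the coredns pod, a GET for the node, then a pod_ready.go status check, repeated roughly every 500ms. As a rough sketch of what such a waiter looks like when written against client-go — illustrative names and an assumed kubeconfig path, not minikube's actual pod_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's Ready condition is True.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path; purely illustrative.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx := context.Background()
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			// One GET for the pod and one for the node per iteration,
			// matching the request cadence visible in the log.
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-fhhmr", metav1.GetOptions{})
			if err != nil {
				panic(err)
			}
			if _, err := client.CoreV1().Nodes().Get(ctx, "multinode-737786", metav1.GetOptions{}); err != nil {
				panic(err)
			}
			if isPodReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q in %q namespace has status Ready:False\n", pod.Name, pod.Namespace)
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}

	End of editor's note; the captured log continues below.]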
	I0522 18:32:58.471854  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.471880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.471893  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.471899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.474173  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.474197  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.474206  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.474211  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.474216  160939 round_trippers.go:580]     Audit-Id: 0827c408-752f-4496-b2bf-06881300dabc
	I0522 18:32:58.474220  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.474224  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.474229  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.474408  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.474983  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.474998  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.475008  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.475014  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.476910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.476934  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.476952  160939 round_trippers.go:580]     Audit-Id: 338928cb-0e5e-4004-be77-29760ea7f6ae
	I0522 18:32:58.476958  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.476962  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.476966  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.476971  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.476986  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.477133  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:58.972097  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.972125  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.972137  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.972141  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.974651  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.974676  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.974683  160939 round_trippers.go:580]     Audit-Id: 3b3e33fc-c0a8-4a82-9e28-68c6c5eaf90e
	I0522 18:32:58.974688  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.974692  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.974695  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.974698  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.974707  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.974973  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.975580  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.975600  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.975610  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.975615  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.977624  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.977644  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.977654  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.977661  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.977666  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.977671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.977676  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.977680  160939 round_trippers.go:580]     Audit-Id: aa509792-9021-4f49-a36b-6862ae864dbf
	I0522 18:32:58.977836  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.471442  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.471471  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.471481  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.471486  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.473954  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.473974  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.473983  160939 round_trippers.go:580]     Audit-Id: 04e773e3-ead6-4608-b93f-200b1f7771a2
	I0522 18:32:59.473989  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.473992  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.473997  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.474001  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.474005  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.474205  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.474819  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.474880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.474905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.474923  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.476903  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.476923  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.476932  160939 round_trippers.go:580]     Audit-Id: 57919320-6611-4945-a59e-eab9e9d1f7e3
	I0522 18:32:59.476937  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.476943  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.476949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.476953  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.476958  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.477092  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.971835  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.971912  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.971932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.971946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.974565  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.974586  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.974602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.974606  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.974610  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.974614  160939 round_trippers.go:580]     Audit-Id: 4509f4e5-e206-4cb4-9616-c5dedd8269bf
	I0522 18:32:59.974619  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.974624  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.974794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.975386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.975404  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.975413  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.975419  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.977401  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.977425  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.977434  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.977440  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.977445  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.977449  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.977453  160939 round_trippers.go:580]     Audit-Id: ba22dbea-6d68-4ec4-bcad-c24172ba5062
	I0522 18:32:59.977458  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.977594  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.977937  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
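	[Editor's note: every Pod body in this loop already carries "deletionTimestamp":"2024-05-22T18:33:22Z" with "deletionGracePeriodSeconds":30 — the coredns pod is terminating while the waiter keeps polling it by name, so its Ready condition can never flip back to True. A short, hedged helper for telling "not yet ready" apart from "terminating", built only on standard client-go types (not minikube code):

	// podWaitState classifies a polled pod. Illustrative only; relies on
	// corev1 "k8s.io/api/core/v1" as in the previous sketch.
	func podWaitState(pod *corev1.Pod) string {
		if pod.DeletionTimestamp != nil {
			// The API server sets metadata.deletionTimestamp when deletion
			// begins; this pod disappears after its 30s grace period and
			// never becomes Ready again. A robust waiter would re-resolve
			// the ReplicaSet's current pods (e.g. by the k8s-app=kube-dns
			// label) rather than keep polling a doomed pod name.
			return "terminating"
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return "ready"
			}
		}
		return "not-ready"
	}

	End of editor's note; the captured log continues below.]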
	I0522 18:33:00.471222  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.471241  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.471249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.471252  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.473593  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.473618  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.473629  160939 round_trippers.go:580]     Audit-Id: c4fb389b-3f7d-490e-a802-3bf985dfd423
	I0522 18:33:00.473636  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.473641  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.473645  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.473651  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.473656  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.473892  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.474545  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.474565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.474576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.474581  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.476561  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.476581  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.476590  160939 round_trippers.go:580]     Audit-Id: 67254c57-0400-4b43-af9d-f4913af7b105
	I0522 18:33:00.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.476603  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.476608  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.476611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.476748  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:00.971233  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.971261  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.971299  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.971306  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.973731  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.973750  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.973758  160939 round_trippers.go:580]     Audit-Id: 2f76e9b4-7689-4d89-b284-e9126bd9bad5
	I0522 18:33:00.973762  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.973765  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.973771  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.973774  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.973784  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.974017  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.974608  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.974625  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.974634  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.974639  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.976439  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.976457  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.976465  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.976470  160939 round_trippers.go:580]     Audit-Id: f4fe94f7-5d5c-4b51-a0c7-f46b19a6f0d4
	I0522 18:33:00.976477  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.976485  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.976494  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.976502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.976610  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.471893  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.471931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.471942  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.471949  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.474657  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.474680  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.474688  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.474696  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.474702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.474725  160939 round_trippers.go:580]     Audit-Id: f26f6817-f4b1-4acb-bdf5-088215c31307
	I0522 18:33:01.474736  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.474740  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.474974  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.475618  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.475639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.475649  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.475655  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.477465  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.477487  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.477497  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.477505  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.477510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.477514  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.477517  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.477524  160939 round_trippers.go:580]     Audit-Id: 1977529f-1acd-423c-9682-42cf6dd4398d
	I0522 18:33:01.477708  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.971204  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.971371  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.971388  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.971393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974041  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.974091  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.974104  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.974111  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.974116  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.974121  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.974127  160939 round_trippers.go:580]     Audit-Id: 292c70c4-b00e-4836-b96a-6c8a747f9bd9
	I0522 18:33:01.974131  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.974293  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.974866  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.974888  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.974899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.976825  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.976848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.976856  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.976862  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.976868  160939 round_trippers.go:580]     Audit-Id: 388c0271-dee4-4384-b77b-c690f1d36c5a
	I0522 18:33:01.976873  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.976880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.976883  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.977037  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.471454  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.471549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.471565  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.471574  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.474157  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.474178  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.474186  160939 round_trippers.go:580]     Audit-Id: 82bb2437-1ea8-4e8d-9e5f-70376d7ee9ee
	I0522 18:33:02.474192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.474196  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.474200  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.474205  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.474208  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.474392  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.475060  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.475077  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.475087  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.475092  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.477070  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.477099  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.477109  160939 round_trippers.go:580]     Audit-Id: 67eab720-8fd6-4965-a754-5010c88a7253
	I0522 18:33:02.477116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.477120  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.477124  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.477127  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.477131  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.477280  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.477649  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
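	[Editor's note: the round_trippers.go lines throughout this section are client-go's verbose transport logging, and the X-Kubernetes-Pf-Flowschema-Uid / X-Kubernetes-Pf-Prioritylevel-Uid response headers are attached by the API server's Priority and Fairness feature. A minimal sketch of a wrapper transport that produces output of a similar shape — the formatting and kubeconfig path are assumptions, not client-go's exact code:

	package main

	import (
		"context"
		"fmt"
		"net/http"
		"strings"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// debugTransport mimics the shape of client-go's verbose request
	// logging (round_trippers.go output at high -v levels).
	type debugTransport struct{ next http.RoundTripper }

	func (d debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		start := time.Now()
		fmt.Printf("%s %s\n", req.Method, req.URL)
		fmt.Println("Request Headers:")
		for k, vals := range req.Header {
			fmt.Printf("    %s: %s\n", k, strings.Join(vals, ", "))
		}
		resp, err := d.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		fmt.Printf("Response Status: %s in %d milliseconds\n",
			resp.Status, time.Since(start).Milliseconds())
		fmt.Println("Response Headers:")
		for k, vals := range resp.Header {
			// Includes the server-set Priority and Fairness headers
			// seen in the log above.
			fmt.Printf("    %s: %s\n", k, strings.Join(vals, ", "))
		}
		return resp, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		// rest.Config.Wrap installs the transport for all clients built
		// from this config.
		cfg.Wrap(func(rt http.RoundTripper) http.RoundTripper {
			return debugTransport{next: rt}
		})
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Any call through the clientset is now logged, e.g.:
		if _, err := client.CoreV1().Nodes().Get(context.Background(), "multinode-737786", metav1.GetOptions{}); err != nil {
			panic(err)
		}
	}

	End of editor's note; the captured log continues below.]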
	I0522 18:33:02.971540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.971565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.971576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.971582  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.974293  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.974315  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.974325  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.974330  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.974335  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.974340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.974345  160939 round_trippers.go:580]     Audit-Id: ad75c6ab-9962-47cf-be26-f410ec61bd12
	I0522 18:33:02.974350  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.974587  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.975218  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.975239  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.975249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.975258  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.977182  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.977245  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.977260  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.977266  160939 round_trippers.go:580]     Audit-Id: c0467f5a-9a3a-40e8-b473-9c175fd6891e
	I0522 18:33:02.977271  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.977277  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.977284  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.977288  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.977392  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.472108  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.472133  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.472143  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.472149  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.474741  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.474768  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.474778  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.474782  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.474787  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.474792  160939 round_trippers.go:580]     Audit-Id: 1b9bea48-179f-40ca-a879-0e436eb40d14
	I0522 18:33:03.474797  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.474801  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.474970  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.475572  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.475591  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.475601  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.475607  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.477470  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.477489  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.477497  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.477502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.477506  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.477511  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.477515  160939 round_trippers.go:580]     Audit-Id: b00b1393-d773-4e79-83a7-fbadc0d83dce
	I0522 18:33:03.477521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.477650  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.971411  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.971440  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.971450  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.971455  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.974132  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.974155  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.974164  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.974171  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.974176  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.974180  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.974185  160939 round_trippers.go:580]     Audit-Id: 2b46951a-0d87-464c-b928-e0491b518b0e
	I0522 18:33:03.974192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.974344  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.974929  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.974949  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.974959  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.974965  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.976727  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.976759  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.976769  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.976775  160939 round_trippers.go:580]     Audit-Id: efda080a-3af4-4b70-aa46-baefc2b1a086
	I0522 18:33:03.976779  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.976784  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.976788  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.976792  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.977006  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.471440  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.471466  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.471475  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.471478  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.473781  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.473798  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.473806  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.473812  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.473823  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.473828  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.473832  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.473837  160939 round_trippers.go:580]     Audit-Id: 584fe422-d82d-4c7e-81d2-665d8be8873b
	I0522 18:33:04.474014  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.474484  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.474542  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.474564  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.474581  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.476818  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.476848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.476856  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.476862  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.476866  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.476872  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.476877  160939 round_trippers.go:580]     Audit-Id: 577875ba-d973-41fb-8b48-0973202f1354
	I0522 18:33:04.476885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.477034  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.971729  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.971751  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.971759  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.971763  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.974273  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.974295  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.974304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.974311  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.974318  160939 round_trippers.go:580]     Audit-Id: e77cbda3-9098-456e-962d-06d9e7e98aee
	I0522 18:33:04.974323  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.974336  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.974341  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.974475  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.975121  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.975157  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.975167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.975172  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.977047  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:04.977076  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.977086  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.977094  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.977102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.977110  160939 round_trippers.go:580]     Audit-Id: 15591115-c0cb-473f-90d4-6c56cf6353d7
	I0522 18:33:04.977116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.977124  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.977257  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.977558  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
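The pattern above is minikube's pod_ready gate: roughly every 500 ms (compare the request timestamps) it re-fetches the coredns pod and its node, and it keeps polling until the pod's Ready condition turns True or the 6m0s budget expires. Below is a minimal client-go sketch of that loop; it is an illustration rather than minikube's actual pod_ready.go, and the kubeconfig path and pod name are assumptions taken from this log.

// Minimal sketch (not minikube's pod_ready.go) of the readiness poll seen
// in the log above: fetch the pod every 500ms until Ready or timeout.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: the default kubeconfig (~/.kube/config) points at this cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll every 500ms for up to 6 minutes, matching the cadence and budget
	// visible in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7db6d8ff4d-fhhmr", metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			// A terminal phase can never become Ready again, so give up early;
			// this mirrors the "Succeeded (skipping!)" abort further down.
			if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
				return false, fmt.Errorf("pod reached terminal phase %q", pod.Status.Phase)
			}
			return podReady(pod), nil
		})
	fmt.Println("wait result:", err)
}

Run against the profile's kubeconfig, this reproduces both outcomes seen below: an early abort once the pod reaches a terminal phase, and success once Ready flips to True.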
	I0522 18:33:05.471962  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.471987  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.471997  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.472003  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.474481  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.474506  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.474516  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.474523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.474527  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.474532  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.474536  160939 round_trippers.go:580]     Audit-Id: fdb343ad-37ed-4d5e-8481-409ca7bff1bb
	I0522 18:33:05.474542  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.474675  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.475316  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.475335  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.475345  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.475349  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.477162  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:05.477192  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.477208  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.477219  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.477224  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.477230  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.477237  160939 round_trippers.go:580]     Audit-Id: 5a4a1adb-a9e7-45d6-89b9-6f8cbdc8e14f
	I0522 18:33:05.477241  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.477365  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:05.971575  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.971603  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.971614  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.971620  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.973961  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.973988  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.973998  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.974005  160939 round_trippers.go:580]     Audit-Id: 6cf57dbb-f61f-4a34-ba71-0fa1a7be6c2f
	I0522 18:33:05.974009  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.974015  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.974020  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.974024  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.974227  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.974844  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.974866  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.974877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.974885  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.976914  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.976937  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.976948  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.976955  160939 round_trippers.go:580]     Audit-Id: f5c6902b-e141-4739-b75c-abe5d7d10bcc
	I0522 18:33:05.976962  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.976969  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.976977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.976982  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.977139  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.471359  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:06.471382  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.471390  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.471393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.473976  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.473998  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.474008  160939 round_trippers.go:580]     Audit-Id: 678a5898-c668-42b8-9f9d-cd08c0af9f0a
	I0522 18:33:06.474014  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.474021  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.474026  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.474032  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.474036  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.474212  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"419","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6465 chars]
	I0522 18:33:06.474787  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.474806  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.474816  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.474824  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.476696  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.476720  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.476727  160939 round_trippers.go:580]     Audit-Id: 08522360-196f-4610-a526-8fbc3b876994
	I0522 18:33:06.476732  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.476736  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.476739  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.476742  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.476754  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.476918  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.477418  160939 pod_ready.go:97] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0522 18:33:06.477449  160939 pod_ready.go:81] duration metric: took 14.506466075s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	E0522 18:33:06.477464  160939 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
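The abort above is the wait giving up early rather than timing out: this pod already carries deletionTimestamp 2024-05-22T18:33:22Z (consistent with the CoreDNS Deployment being scaled down to a single replica), its container exited 0 with Reason:Completed, and the phase is Succeeded, which is terminal and can never become Ready. The same two fields can be probed by hand; an illustrative command, assuming kubectl is pointed at this cluster's kubeconfig:

	kubectl -n kube-system get pod coredns-7db6d8ff4d-fhhmr -o jsonpath='{.status.phase} {.status.conditions[?(@.type=="Ready")].status}'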
	I0522 18:33:06.477476  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:06.477540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.477549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.477558  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.477569  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.479562  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.479577  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.479583  160939 round_trippers.go:580]     Audit-Id: 9a30cf33-1204-4670-a99f-86946c97d423
	I0522 18:33:06.479587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.479591  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.479597  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.479605  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.479611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.479794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.480253  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.480269  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.480275  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.480279  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.481839  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.481857  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.481867  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.481872  160939 round_trippers.go:580]     Audit-Id: fa40a49d-204f-481d-8912-a34512c1ae3b
	I0522 18:33:06.481876  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.481880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.481884  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.481888  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.481980  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.978658  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.978680  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.978691  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.978699  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.980836  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.980853  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.980860  160939 round_trippers.go:580]     Audit-Id: afbb292e-0ad0-4084-869c-e9ab1e1013e2
	I0522 18:33:06.980864  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.980867  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.980869  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.980871  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.980874  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.981047  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.981449  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.981462  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.981468  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.981471  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.982978  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.983001  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.983007  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.983010  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.983014  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.983018  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.983021  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.983024  160939 round_trippers.go:580]     Audit-Id: 5f3372bc-5c9a-49ce-8e2e-d96da0513d85
	I0522 18:33:06.983146  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.478352  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:07.478377  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.478384  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.478388  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.480498  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.480523  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.480531  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.480535  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.480540  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.480543  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.480546  160939 round_trippers.go:580]     Audit-Id: eb5f2654-4971-4578-bff8-10e4102baa23
	I0522 18:33:07.480550  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.480747  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:33:07.481177  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.481191  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.481197  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.481201  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.482856  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.482869  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.482876  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.482880  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.482882  160939 round_trippers.go:580]     Audit-Id: 8e36f69f-54f0-4e9d-a61f-f28960dbb847
	I0522 18:33:07.482885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.482891  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.482896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.483013  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.483304  160939 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.483324  160939 pod_ready.go:81] duration metric: took 1.005836965s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
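The surviving replica clears the gate in about a second, after which the loop moves on to the static control-plane pods. A one-shot equivalent of this readiness gate from the command line (illustrative, assuming kubectl targets the same cluster) would be:

	kubectl -n kube-system wait pod/coredns-7db6d8ff4d-jhsz9 --for=condition=Ready --timeout=6m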
	I0522 18:33:07.483334  160939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:33:07.483393  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.483399  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.483403  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.485055  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.485074  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.485080  160939 round_trippers.go:580]     Audit-Id: 36a9d3b1-5c0c-41cd-92e6-65aaf83162ed
	I0522 18:33:07.485084  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.485089  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.485093  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.485098  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.485102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.485211  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:33:07.485525  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.485537  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.485544  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.485547  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.486957  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.486977  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.486984  160939 round_trippers.go:580]     Audit-Id: 4d183f34-de9b-40df-89b0-747f4b8d080a
	I0522 18:33:07.486991  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.486997  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.487008  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.487015  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.487019  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.487106  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.487417  160939 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.487433  160939 pod_ready.go:81] duration metric: took 4.091969ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487445  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487498  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:33:07.487505  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.487511  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.487514  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.489030  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.489044  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.489060  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.489064  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.489068  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.489072  160939 round_trippers.go:580]     Audit-Id: 816d35e6-d77c-435e-912a-947f9c9ca4d7
	I0522 18:33:07.489075  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.489078  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.489182  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:33:07.489546  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.489558  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.489564  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.489568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.490910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.490924  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.490930  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.490934  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.490937  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.490942  160939 round_trippers.go:580]     Audit-Id: 15a2ac49-01ac-4660-8380-560b4572c707
	I0522 18:33:07.490945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.490949  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.491063  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.491412  160939 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.491430  160939 pod_ready.go:81] duration metric: took 3.978447ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491441  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491501  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:33:07.491510  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.491520  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.491525  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.492901  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.492917  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.492936  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.492944  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.492949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.492953  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.492958  160939 round_trippers.go:580]     Audit-Id: 599fa209-a829-4a91-9f16-72ec6e1a6954
	I0522 18:33:07.492961  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.493092  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:33:07.493557  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.493574  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.493584  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.493594  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.495001  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.495023  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.495032  160939 round_trippers.go:580]     Audit-Id: 451564e8-a844-4514-b8e9-ba808ecbe9d8
	I0522 18:33:07.495042  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.495047  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.495051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.495057  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.495061  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.495200  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.495470  160939 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.495494  160939 pod_ready.go:81] duration metric: took 4.045749ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495507  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495547  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:33:07.495553  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.495561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.495568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.497087  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.497100  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.497105  160939 round_trippers.go:580]     Audit-Id: 1fe00356-708f-49ce-b6e8-360006eb0d30
	I0522 18:33:07.497109  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.497114  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.497119  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.497123  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.497129  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.497236  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:33:07.671971  160939 request.go:629] Waited for 174.334017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672035  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672040  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.672048  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.672051  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.673738  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.673754  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.673762  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.673769  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.673773  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.673777  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.673781  160939 round_trippers.go:580]     Audit-Id: 72f84e56-248f-49c0-b60e-16c5fc7a3e8c
	I0522 18:33:07.673785  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.673915  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.674199  160939 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.674216  160939 pod_ready.go:81] duration metric: took 178.701037ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.674225  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.871582  160939 request.go:629] Waited for 197.277518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871632  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.871646  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.871651  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.873675  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.873695  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.873702  160939 round_trippers.go:580]     Audit-Id: d0aea0c3-6995-4f17-9b3f-5c0b00c0a82e
	I0522 18:33:07.873707  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.873710  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.873714  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.873718  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.873721  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.873885  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:33:08.071516  160939 request.go:629] Waited for 197.279562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071592  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071600  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.071608  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.071612  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.073750  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.074093  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.074136  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.074152  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.074164  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.074178  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.074192  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.074205  160939 round_trippers.go:580]     Audit-Id: 9b07fddc-fd9a-4741-b67f-7bda2d392bdb
	I0522 18:33:08.074358  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:08.074852  160939 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:08.074892  160939 pod_ready.go:81] duration metric: took 400.659133ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:08.074912  160939 pod_ready.go:38] duration metric: took 16.114074117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
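The readiness gate that just completed polls each control-plane pod and re-reads the node between checks; the "Waited for ... due to client-side throttling" lines above come from client-go's client-side rate limiter, not from API priority and fairness. A minimal client-go sketch of the Ready predicate being polled (a sketch only, assuming an already-built clientset; the package and helper names are illustrative):

	package poll

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// isPodReady reports whether the named kube-system pod has the
	// PodReady condition set to True: the predicate behind the
	// pod_ready.go `"Ready":"True"` lines above.
	func isPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}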
	I0522 18:33:08.074944  160939 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:33:08.075020  160939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:33:08.085416  160939 command_runner.go:130] > 2247
	I0522 18:33:08.086205  160939 api_server.go:72] duration metric: took 16.337127031s to wait for apiserver process to appear ...
	I0522 18:33:08.086224  160939 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:33:08.086244  160939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:33:08.090306  160939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:33:08.090371  160939 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:33:08.090381  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.090392  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.090411  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.091107  160939 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:33:08.091121  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.091127  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.091130  160939 round_trippers.go:580]     Audit-Id: d9f416c6-963b-4b2c-9260-40a10a9a60da
	I0522 18:33:08.091133  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.091136  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.091138  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.091141  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.091144  160939 round_trippers.go:580]     Content-Length: 263
	I0522 18:33:08.091156  160939 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:33:08.091223  160939 api_server.go:141] control plane version: v1.30.1
	I0522 18:33:08.091237  160939 api_server.go:131] duration metric: took 5.007834ms to wait for apiserver health ...
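The health gate above is two plain GETs: /healthz must return the literal body "ok", and /version is decoded for the gitVersion reported as the control plane version. A minimal sketch of the same probe (it assumes an *http.Client already configured with the cluster's client certificates; the function name is illustrative):

	package probe

	import (
		"encoding/json"
		"fmt"
		"io"
		"net/http"
	)

	// apiServerVersion mirrors the healthz-then-version sequence above.
	func apiServerVersion(c *http.Client, base string) (string, error) {
		resp, err := c.Get(base + "/healthz")
		if err != nil {
			return "", err
		}
		body, err := io.ReadAll(resp.Body)
		resp.Body.Close()
		if err != nil {
			return "", err
		}
		if string(body) != "ok" {
			return "", fmt.Errorf("healthz returned %q, want ok", body)
		}
		resp, err = c.Get(base + "/version")
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		var v struct {
			GitVersion string `json:"gitVersion"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
			return "", err
		}
		return v.GitVersion, nil // "v1.30.1" in the response body above
	}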
	I0522 18:33:08.091244  160939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:33:08.271652  160939 request.go:629] Waited for 180.311539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271713  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271719  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.271727  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.271732  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.282797  160939 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0522 18:33:08.282826  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.282835  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.282840  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.282847  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.282853  160939 round_trippers.go:580]     Audit-Id: abfdd3f0-3612-4cc0-9cb4-169b86afc2f2
	I0522 18:33:08.282857  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.282862  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.284550  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.287099  160939 system_pods.go:59] 8 kube-system pods found
	I0522 18:33:08.287133  160939 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.287139  160939 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.287143  160939 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.287148  160939 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.287156  160939 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.287161  160939 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.287170  160939 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.287175  160939 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.287184  160939 system_pods.go:74] duration metric: took 195.931068ms to wait for pod list to return data ...
	I0522 18:33:08.287199  160939 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:33:08.471518  160939 request.go:629] Waited for 184.244722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471609  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471620  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.471632  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.471638  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.473861  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.473879  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.473885  160939 round_trippers.go:580]     Audit-Id: 373a6323-7376-4ad7-973b-c7b9843fbc1e
	I0522 18:33:08.473889  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.473892  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.473895  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.473898  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.473902  160939 round_trippers.go:580]     Content-Length: 261
	I0522 18:33:08.473906  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.473926  160939 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:33:08.474181  160939 default_sa.go:45] found service account: "default"
	I0522 18:33:08.474221  160939 default_sa.go:55] duration metric: took 187.005275ms for default service account to be created ...
	I0522 18:33:08.474236  160939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:33:08.671668  160939 request.go:629] Waited for 197.344631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671731  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671738  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.671747  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.671754  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.674660  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.674693  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.674702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.674707  160939 round_trippers.go:580]     Audit-Id: a86ce0e7-c7ca-4d9a-b3f4-5977392399ab
	I0522 18:33:08.674710  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.674715  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.674721  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.674726  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.675199  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.677649  160939 system_pods.go:86] 8 kube-system pods found
	I0522 18:33:08.677676  160939 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.677682  160939 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.677689  160939 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.677700  160939 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.677712  160939 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.677718  160939 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.677728  160939 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.677736  160939 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.677746  160939 system_pods.go:126] duration metric: took 203.502619ms to wait for k8s-apps to be running ...
	I0522 18:33:08.677758  160939 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:33:08.677814  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:33:08.688253  160939 system_svc.go:56] duration metric: took 10.491535ms WaitForService to wait for kubelet
	I0522 18:33:08.688273  160939 kubeadm.go:576] duration metric: took 16.939194998s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:33:08.688296  160939 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:33:08.871835  160939 request.go:629] Waited for 183.471986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871919  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.871941  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.871948  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.873838  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:08.873861  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.873868  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.873874  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.873881  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.873884  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.873888  160939 round_trippers.go:580]     Audit-Id: 58d6eaf2-6ad2-480d-a68d-b490633e56b2
	I0522 18:33:08.873893  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.874043  160939 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"433","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5061 chars]
	I0522 18:33:08.874388  160939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:33:08.874407  160939 node_conditions.go:123] node cpu capacity is 8
	I0522 18:33:08.874418  160939 node_conditions.go:105] duration metric: took 186.116583ms to run NodePressure ...
	I0522 18:33:08.874431  160939 start.go:240] waiting for startup goroutines ...
	I0522 18:33:08.874437  160939 start.go:245] waiting for cluster config update ...
	I0522 18:33:08.874451  160939 start.go:254] writing updated cluster config ...
	I0522 18:33:08.876274  160939 out.go:177] 
	I0522 18:33:08.877676  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:33:08.877789  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.879303  160939 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:33:08.880612  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:33:08.881728  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:33:08.882756  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:08.882774  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:33:08.882785  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:33:08.882855  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:33:08.882870  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:33:08.882934  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.898326  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:33:08.898343  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:33:08.898358  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:33:08.898387  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:33:08.898479  160939 start.go:364] duration metric: took 72.592µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:33:08.898505  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:33:08.898623  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:33:08.900307  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:33:08.900408  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:33:08.900435  160939 client.go:168] LocalClient.Create starting
	I0522 18:33:08.900508  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:33:08.900541  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900564  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900623  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:33:08.900647  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900668  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900894  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:33:08.915750  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc001f32540 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:33:08.915790  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
	I0522 18:33:08.915845  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:33:08.930295  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:33:08.945898  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:33:08.945964  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:33:09.453161  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:33:09.453202  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:09.453224  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:33:09.453289  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:33:13.570301  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.116968437s)
	I0522 18:33:13.570337  160939 kic.go:203] duration metric: took 4.117109757s to extract preloaded images to volume ...
	W0522 18:33:13.570466  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:33:13.570568  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:33:13.614931  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:33:13.883217  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:33:13.899745  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:13.916953  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:33:13.956223  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:33:13.956258  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:33:14.377830  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:33:14.377884  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:33:14.398081  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.414616  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:33:14.414636  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:33:14.454848  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.472868  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:33:14.472944  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.489872  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.490088  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.490103  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:33:14.602489  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.602516  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:33:14.602569  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.619132  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.619380  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.619398  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:33:14.740786  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.740854  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.756827  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.756995  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.757012  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:33:14.867113  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:33:14.867142  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:33:14.867157  160939 ubuntu.go:177] setting up certificates
	I0522 18:33:14.867169  160939 provision.go:84] configureAuth start
	I0522 18:33:14.867230  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.882769  160939 provision.go:87] duration metric: took 15.590775ms to configureAuth
	W0522 18:33:14.882788  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.882814  160939 retry.go:31] will retry after 133.214µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
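This configureAuth failure is the one the remainder of the log retries without progress. The inspect call indexes .NetworkSettings.Networks by the machine name "multinode-737786-m02", but the docker run above attached the container to the cluster network "multinode-737786", so the template matches no entry and prints an empty string; splitting that on "," yields a single empty element instead of the expected "IPv4,IPv6" pair, which matches the "got 1 values: []" message. A standalone demonstration of that split behaviour (plain Go, independent of minikube):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// An empty template result still splits into one (empty) element.
		parts := strings.Split("", ",")
		fmt.Printf("got %d values: %v\n", len(parts), parts) // got 1 values: []
	}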
	I0522 18:33:14.883930  160939 provision.go:84] configureAuth start
	I0522 18:33:14.883986  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.899452  160939 provision.go:87] duration metric: took 15.501642ms to configureAuth
	W0522 18:33:14.899474  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.899491  160939 retry.go:31] will retry after 108.916µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.900597  160939 provision.go:84] configureAuth start
	I0522 18:33:14.900654  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.915555  160939 provision.go:87] duration metric: took 14.940574ms to configureAuth
	W0522 18:33:14.915579  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.915597  160939 retry.go:31] will retry after 309.632µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.916706  160939 provision.go:84] configureAuth start
	I0522 18:33:14.916763  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.931974  160939 provision.go:87] duration metric: took 15.250688ms to configureAuth
	W0522 18:33:14.931998  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.932022  160939 retry.go:31] will retry after 318.322µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.933148  160939 provision.go:84] configureAuth start
	I0522 18:33:14.933214  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.948456  160939 provision.go:87] duration metric: took 15.28648ms to configureAuth
	W0522 18:33:14.948480  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.948498  160939 retry.go:31] will retry after 399.734µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.949641  160939 provision.go:84] configureAuth start
	I0522 18:33:14.949703  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.966281  160939 provision.go:87] duration metric: took 16.616876ms to configureAuth
	W0522 18:33:14.966304  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.966321  160939 retry.go:31] will retry after 408.958µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.967426  160939 provision.go:84] configureAuth start
	I0522 18:33:14.967490  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.983570  160939 provision.go:87] duration metric: took 16.124586ms to configureAuth
	W0522 18:33:14.983595  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.983618  160939 retry.go:31] will retry after 1.326072ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.985801  160939 provision.go:84] configureAuth start
	I0522 18:33:14.985868  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.000835  160939 provision.go:87] duration metric: took 15.012309ms to configureAuth
	W0522 18:33:15.000856  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.000876  160939 retry.go:31] will retry after 915.276µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.001989  160939 provision.go:84] configureAuth start
	I0522 18:33:15.002061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.016920  160939 provision.go:87] duration metric: took 14.912197ms to configureAuth
	W0522 18:33:15.016940  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.016956  160939 retry.go:31] will retry after 2.309554ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.020139  160939 provision.go:84] configureAuth start
	I0522 18:33:15.020206  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.035720  160939 provision.go:87] duration metric: took 15.563337ms to configureAuth
	W0522 18:33:15.035737  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.035758  160939 retry.go:31] will retry after 5.684682ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.041949  160939 provision.go:84] configureAuth start
	I0522 18:33:15.042023  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.057131  160939 provision.go:87] duration metric: took 15.161716ms to configureAuth
	W0522 18:33:15.057153  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.057173  160939 retry.go:31] will retry after 7.16749ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.065354  160939 provision.go:84] configureAuth start
	I0522 18:33:15.065419  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.080211  160939 provision.go:87] duration metric: took 14.836861ms to configureAuth
	W0522 18:33:15.080233  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.080253  160939 retry.go:31] will retry after 11.273171ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.092437  160939 provision.go:84] configureAuth start
	I0522 18:33:15.092522  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.107812  160939 provision.go:87] duration metric: took 15.35491ms to configureAuth
	W0522 18:33:15.107829  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.107845  160939 retry.go:31] will retry after 8.109728ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.117029  160939 provision.go:84] configureAuth start
	I0522 18:33:15.117103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.132558  160939 provision.go:87] duration metric: took 15.508983ms to configureAuth
	W0522 18:33:15.132577  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.132597  160939 retry.go:31] will retry after 10.345201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.143792  160939 provision.go:84] configureAuth start
	I0522 18:33:15.143857  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.159011  160939 provision.go:87] duration metric: took 15.196792ms to configureAuth
	W0522 18:33:15.159034  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.159054  160939 retry.go:31] will retry after 30.499115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.190240  160939 provision.go:84] configureAuth start
	I0522 18:33:15.190329  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.207177  160939 provision.go:87] duration metric: took 16.913741ms to configureAuth
	W0522 18:33:15.207195  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.207211  160939 retry.go:31] will retry after 63.879043ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.271445  160939 provision.go:84] configureAuth start
	I0522 18:33:15.271548  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.287528  160939 provision.go:87] duration metric: took 16.057048ms to configureAuth
	W0522 18:33:15.287550  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.287569  160939 retry.go:31] will retry after 67.853567ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.355802  160939 provision.go:84] configureAuth start
	I0522 18:33:15.355901  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.372258  160939 provision.go:87] duration metric: took 16.425467ms to configureAuth
	W0522 18:33:15.372281  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.372300  160939 retry.go:31] will retry after 129.065548ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.501513  160939 provision.go:84] configureAuth start
	I0522 18:33:15.501606  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.517774  160939 provision.go:87] duration metric: took 16.234544ms to configureAuth
	W0522 18:33:15.517792  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.517809  160939 retry.go:31] will retry after 177.855143ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.696167  160939 provision.go:84] configureAuth start
	I0522 18:33:15.696277  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.712184  160939 provision.go:87] duration metric: took 15.973904ms to configureAuth
	W0522 18:33:15.712203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.712222  160939 retry.go:31] will retry after 282.785493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.995691  160939 provision.go:84] configureAuth start
	I0522 18:33:15.995782  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.011555  160939 provision.go:87] duration metric: took 15.836293ms to configureAuth
	W0522 18:33:16.011573  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.011590  160939 retry.go:31] will retry after 182.7986ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.194929  160939 provision.go:84] configureAuth start
	I0522 18:33:16.195022  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.210991  160939 provision.go:87] duration metric: took 16.035288ms to configureAuth
	W0522 18:33:16.211015  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.211031  160939 retry.go:31] will retry after 462.848752ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.674586  160939 provision.go:84] configureAuth start
	I0522 18:33:16.674669  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.691880  160939 provision.go:87] duration metric: took 17.266922ms to configureAuth
	W0522 18:33:16.691906  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.691924  160939 retry.go:31] will retry after 502.555206ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.194526  160939 provision.go:84] configureAuth start
	I0522 18:33:17.194646  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.210421  160939 provision.go:87] duration metric: took 15.865877ms to configureAuth
	W0522 18:33:17.210440  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.210460  160939 retry.go:31] will retry after 567.726401ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.779177  160939 provision.go:84] configureAuth start
	I0522 18:33:17.779290  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.795539  160939 provision.go:87] duration metric: took 16.336289ms to configureAuth
	W0522 18:33:17.795558  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.795575  160939 retry.go:31] will retry after 1.826878631s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.622720  160939 provision.go:84] configureAuth start
	I0522 18:33:19.622824  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:19.638518  160939 provision.go:87] duration metric: took 15.756609ms to configureAuth
	W0522 18:33:19.638535  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.638551  160939 retry.go:31] will retry after 1.924893574s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.564442  160939 provision.go:84] configureAuth start
	I0522 18:33:21.564544  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:21.580835  160939 provision.go:87] duration metric: took 16.362041ms to configureAuth
	W0522 18:33:21.580858  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.580874  160939 retry.go:31] will retry after 4.939303373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.521956  160939 provision.go:84] configureAuth start
	I0522 18:33:26.522061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:26.537982  160939 provision.go:87] duration metric: took 16.001203ms to configureAuth
	W0522 18:33:26.538004  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.538030  160939 retry.go:31] will retry after 3.636518909s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.175081  160939 provision.go:84] configureAuth start
	I0522 18:33:30.175184  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:30.191022  160939 provision.go:87] duration metric: took 15.915164ms to configureAuth
	W0522 18:33:30.191041  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.191058  160939 retry.go:31] will retry after 10.480093853s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.671328  160939 provision.go:84] configureAuth start
	I0522 18:33:40.671406  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:40.687409  160939 provision.go:87] duration metric: took 16.054951ms to configureAuth
	W0522 18:33:40.687427  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.687455  160939 retry.go:31] will retry after 15.937633407s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.627256  160939 provision.go:84] configureAuth start
	I0522 18:33:56.627376  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:56.643481  160939 provision.go:87] duration metric: took 16.179065ms to configureAuth
	W0522 18:33:56.643501  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.643521  160939 retry.go:31] will retry after 13.921044681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.565323  160939 provision.go:84] configureAuth start
	I0522 18:34:10.565412  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:10.582184  160939 provision.go:87] duration metric: took 16.828213ms to configureAuth
	W0522 18:34:10.582203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.582221  160939 retry.go:31] will retry after 29.913467421s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.496709  160939 provision.go:84] configureAuth start
	I0522 18:34:40.496791  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:40.512924  160939 provision.go:87] duration metric: took 16.185762ms to configureAuth
	W0522 18:34:40.512946  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512964  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512971  160939 machine.go:97] duration metric: took 1m26.040084691s to provisionDockerMachine
	I0522 18:34:40.512977  160939 client.go:171] duration metric: took 1m31.612534317s to LocalClient.Create
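Every iteration of the loop above fails the same way: the inspect template indexes .NetworkSettings.Networks by the container name ("multinode-737786-m02"), while Docker keys that map by network name (here "multinode-737786"), so {{with}} emits nothing and the empty result parses to a single empty value — hence "got 1 values: []". A minimal Go sketch of that parse step, reconstructed from the error text alone (parseContainerIPs is a hypothetical helper, not minikube's actual provision.go code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // parseContainerIPs mimics the check implied by the logged error.
    // Splitting an empty inspect result on "," yields []string{""} —
    // one value that prints as [], exactly what the log shows.
    func parseContainerIPs(container, network string) (string, string, error) {
    	// Index the Networks map by *network* name; the log's template
    	// used the container name, which is absent from the map.
    	format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
    	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
    	if err != nil {
    		return "", "", err
    	}
    	addrs := strings.Split(strings.TrimSpace(string(out)), ",")
    	if len(addrs) != 2 {
    		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
    	}
    	return addrs[0], addrs[1], nil
    }

    func main() {
    	ip4, ip6, err := parseContainerIPs("multinode-737786-m02", "multinode-737786")
    	fmt.Println(ip4, ip6, err)
    }

With the network name as the key, the template would be expected to return both addresses and the provisioning loop would proceed.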
	I0522 18:34:42.514189  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.514234  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:42.530404  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:34:42.611715  160939 command_runner.go:130] > 27%
	I0522 18:34:42.611789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:34:42.615669  160939 command_runner.go:130] > 214G
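Even after auth configuration is abandoned, the host-creation path still probes the node's disk, which is where the 27% and 214G readings come from. A compact sketch of those probes, substituting docker exec for minikube's ssh_runner (an assumption for brevity; diskStats is illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // diskStats runs the same df probes the log shows: percent used of
    // /var, then gigabytes available.
    func diskStats(container string) (usedPct, availG string, err error) {
    	run := func(script string) (string, error) {
    		out, err := exec.Command("docker", "exec", container, "sh", "-c", script).Output()
    		return strings.TrimSpace(string(out)), err
    	}
    	if usedPct, err = run(`df -h /var | awk 'NR==2{print $5}'`); err != nil {
    		return "", "", err
    	}
    	availG, err = run(`df -BG /var | awk 'NR==2{print $4}'`)
    	return usedPct, availG, err
    }

    func main() {
    	used, avail, err := diskStats("multinode-737786-m02")
    	fmt.Println(used, avail, err) // e.g. "27%" "214G" <nil>
    }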
	I0522 18:34:42.615707  160939 start.go:128] duration metric: took 1m33.717073149s to createHost
	I0522 18:34:42.615722  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m33.717228717s
	W0522 18:34:42.615744  160939 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:42.616137  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:42.632434  160939 stop.go:39] StopHost: multinode-737786-m02
	W0522 18:34:42.632685  160939 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.634506  160939 out.go:177] * Stopping node "multinode-737786-m02"  ...
	I0522 18:34:42.635683  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	W0522 18:34:42.651010  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.652276  160939 out.go:177] * Powering off "multinode-737786-m02" via SSH ...
	I0522 18:34:42.653470  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	I0522 18:34:43.708767  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.725456  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:43.725497  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:43.725503  160939 stop.go:96] shutdown container: err=<nil>
	I0522 18:34:43.725538  160939 main.go:141] libmachine: Stopping "multinode-737786-m02"...
	I0522 18:34:43.725609  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.740494  160939 stop.go:66] stop err: Machine "multinode-737786-m02" is already stopped.
	I0522 18:34:43.740519  160939 stop.go:69] host is already stopped
	W0522 18:34:44.740739  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:44.742589  160939 out.go:177] * Deleting "multinode-737786-m02" in docker ...
	I0522 18:34:44.743791  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	I0522 18:34:44.759917  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:44.775348  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	W0522 18:34:44.791230  160939 cli_runner.go:211] docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:34:44.791265  160939 oci.go:650] error shutdown multinode-737786-m02: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 2dc5a71c55c9ef5d6ad1baa728c2ff15efe34f377c26beee83af68ffc394ce01 is not running
	I0522 18:34:45.792215  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:45.808448  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:45.808478  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
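Both PowerOff sequences above follow the same pattern: issue sudo init 0 inside the container, then poll docker container inspect --format {{.State.Status}} until it stops; a second init 0 against an already-exited container fails with "container ... is not running", exactly as logged. A sketch of that loop (stopNode is illustrative; Docker reports the raw state "exited", which minikube surfaces as "Stopped"):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // stopNode asks the node's init to halt, then waits for Docker to
    // report the container as exited.
    func stopNode(container string) error {
    	// Ignore the exec error: if the container already exited, the
    	// exec fails just like the second "sudo init 0" in the log.
    	_ = exec.Command("docker", "exec", "--privileged", "-t", container,
    		"/bin/bash", "-c", "sudo init 0").Run()

    	for i := 0; i < 10; i++ {
    		out, err := exec.Command("docker", "container", "inspect",
    			container, "--format", "{{.State.Status}}").Output()
    		if err != nil {
    			return err
    		}
    		if strings.TrimSpace(string(out)) == "exited" {
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s did not stop in time", container)
    }

    func main() { fmt.Println(stopNode("multinode-737786-m02")) }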
	I0522 18:34:45.808522  160939 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m02
	I0522 18:34:45.828241  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	W0522 18:34:45.843001  160939 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m02 returned with exit code 1
	I0522 18:34:45.843068  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:45.858067  160939 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:34:45.872863  160939 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:34:45.872955  160939 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
	W0522 18:34:45.873163  160939 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:45.873175  160939 start.go:728] Will try again in 5 seconds ...
	I0522 18:34:50.874261  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:34:50.874388  160939 start.go:364] duration metric: took 68.497µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:34:50.874412  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:34:50.874486  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:34:50.876407  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:34:50.876543  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:34:50.876576  160939 client.go:168] LocalClient.Create starting
	I0522 18:34:50.876662  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:34:50.876712  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876732  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.876835  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:34:50.876869  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876890  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.877138  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:50.893470  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc0009258c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:34:50.893509  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
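The "calculated static IP" step is simple arithmetic on the cluster network: with gateway 192.168.67.1, the primary control plane takes .2 and each additional node the next octet, hence 192.168.67.3 for m02. A sketch of that rule as inferred from the log (the indexing convention is an assumption, not lifted from minikube's kic package):

    package main

    import (
    	"fmt"
    	"net"
    )

    // staticIPFor derives a node address from the network gateway:
    // gateway .1 -> primary node .2 -> nth extra node .2+n.
    func staticIPFor(gateway string, nodeIndex int) (string, error) {
    	ip := net.ParseIP(gateway).To4()
    	if ip == nil {
    		return "", fmt.Errorf("not an IPv4 gateway: %s", gateway)
    	}
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3] += byte(1 + nodeIndex) // .1 -> node0 .2, node1 .3, ...
    	return out.String(), nil
    }

    func main() {
    	fmt.Println(staticIPFor("192.168.67.1", 1)) // 192.168.67.3 for m02
    }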
	I0522 18:34:50.893558  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:34:50.909079  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:34:50.925444  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:34:50.925538  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:34:51.321868  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:34:51.321909  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:34:51.321928  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:34:51.321980  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:34:55.613221  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291204502s)
	I0522 18:34:55.613251  160939 kic.go:203] duration metric: took 4.291320169s to extract preloaded images to volume ...
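The preload step bind-mounts the lz4 tarball read-only into a throwaway tar container and extracts it onto the node's /var volume, which took about 4.29s above. A sketch of the equivalent invocation (extractPreload is illustrative; the tarball path is shortened from the logged one):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // extractPreload re-creates the logged step: mount the preload
    // tarball at /preloaded.tar, mount the node volume at /extractDir,
    // and untar with lz4 decompression.
    func extractPreload(tarball, volume, image string) (time.Duration, error) {
    	start := time.Now()
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	err := cmd.Run()
    	return time.Since(start), err
    }

    func main() {
    	d, err := extractPreload(
    		"preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4",
    		"multinode-737786-m02",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887",
    	)
    	fmt.Println(d, err)
    }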
	W0522 18:34:55.613360  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:34:55.613435  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:34:55.658317  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:34:55.924047  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:34:55.941247  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:55.958588  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:34:56.004446  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:34:56.004476  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:34:56.219497  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:34:56.219536  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:34:56.240489  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.268881  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:34:56.268907  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
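Key provisioning is likewise two container operations: write id_rsa.pub into /home/docker/.ssh/authorized_keys, then chown it to docker:docker, as the kic_runner Args line shows. A sketch that streams the key over docker exec stdin instead of minikube's temp-file copy (an assumption for brevity; installSSHKey is illustrative):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // installSSHKey writes the public key into the node container and
    // fixes its ownership, mirroring the two logged steps.
    func installSSHKey(container, pubKeyPath string) error {
    	pub, err := os.ReadFile(pubKeyPath)
    	if err != nil {
    		return err
    	}
    	write := exec.Command("docker", "exec", "-i", container, "sh", "-c",
    		"mkdir -p /home/docker/.ssh && cat > /home/docker/.ssh/authorized_keys")
    	write.Stdin = bytes.NewReader(pub)
    	if err := write.Run(); err != nil {
    		return err
    	}
    	return exec.Command("docker", "exec", "--privileged", container,
    		"chown", "docker:docker", "/home/docker/.ssh/authorized_keys").Run()
    }

    func main() {
    	err := installSSHKey("multinode-737786-m02",
    		os.ExpandEnv("$HOME/.minikube/machines/multinode-737786-m02/id_rsa.pub"))
    	fmt.Println(err)
    }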
	I0522 18:34:56.353114  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.375972  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:34:56.376058  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.395706  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.395915  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.395934  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:34:56.554445  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.554477  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:34:56.554533  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.573230  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.573401  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.573414  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:34:56.702163  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.702242  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.718029  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.718187  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.718204  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:34:56.830876  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:34:56.830907  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:34:56.830922  160939 ubuntu.go:177] setting up certificates
	I0522 18:34:56.830931  160939 provision.go:84] configureAuth start
	I0522 18:34:56.830976  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.846805  160939 provision.go:87] duration metric: took 15.865379ms to configureAuth
	W0522 18:34:56.846831  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.846851  160939 retry.go:31] will retry after 140.64µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.847967  160939 provision.go:84] configureAuth start
	I0522 18:34:56.848042  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.862744  160939 provision.go:87] duration metric: took 14.756628ms to configureAuth
	W0522 18:34:56.862761  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.862777  160939 retry.go:31] will retry after 137.24µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.863887  160939 provision.go:84] configureAuth start
	I0522 18:34:56.863944  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.878368  160939 provision.go:87] duration metric: took 14.464443ms to configureAuth
	W0522 18:34:56.878383  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.878401  160939 retry.go:31] will retry after 307.999µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.879516  160939 provision.go:84] configureAuth start
	I0522 18:34:56.879573  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.894089  160939 provision.go:87] duration metric: took 14.555182ms to configureAuth
	W0522 18:34:56.894104  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.894119  160939 retry.go:31] will retry after 344.81µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.895224  160939 provision.go:84] configureAuth start
	I0522 18:34:56.895305  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.909660  160939 provision.go:87] duration metric: took 14.420335ms to configureAuth
	W0522 18:34:56.909677  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.909697  160939 retry.go:31] will retry after 721.739µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.910804  160939 provision.go:84] configureAuth start
	I0522 18:34:56.910856  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.925678  160939 provision.go:87] duration metric: took 14.857697ms to configureAuth
	W0522 18:34:56.925695  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.925714  160939 retry.go:31] will retry after 381.6µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.926834  160939 provision.go:84] configureAuth start
	I0522 18:34:56.926886  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.941681  160939 provision.go:87] duration metric: took 14.831201ms to configureAuth
	W0522 18:34:56.941702  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.941722  160939 retry.go:31] will retry after 897.088µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.942836  160939 provision.go:84] configureAuth start
	I0522 18:34:56.942908  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.957491  160939 provision.go:87] duration metric: took 14.636033ms to configureAuth
	W0522 18:34:56.957512  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.957529  160939 retry.go:31] will retry after 1.800181ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.959714  160939 provision.go:84] configureAuth start
	I0522 18:34:56.959790  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.976307  160939 provision.go:87] duration metric: took 16.571335ms to configureAuth
	W0522 18:34:56.976326  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.976342  160939 retry.go:31] will retry after 2.324455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.979479  160939 provision.go:84] configureAuth start
	I0522 18:34:56.979532  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.994677  160939 provision.go:87] duration metric: took 15.180277ms to configureAuth
	W0522 18:34:56.994693  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.994709  160939 retry.go:31] will retry after 3.105759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.998893  160939 provision.go:84] configureAuth start
	I0522 18:34:56.998946  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.014214  160939 provision.go:87] duration metric: took 15.303755ms to configureAuth
	W0522 18:34:57.014235  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.014254  160939 retry.go:31] will retry after 5.839455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.020445  160939 provision.go:84] configureAuth start
	I0522 18:34:57.020525  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.035868  160939 provision.go:87] duration metric: took 15.4048ms to configureAuth
	W0522 18:34:57.035886  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.035903  160939 retry.go:31] will retry after 5.406932ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.042088  160939 provision.go:84] configureAuth start
	I0522 18:34:57.042156  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.058449  160939 provision.go:87] duration metric: took 16.342041ms to configureAuth
	W0522 18:34:57.058472  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.058492  160939 retry.go:31] will retry after 11.838168ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.070675  160939 provision.go:84] configureAuth start
	I0522 18:34:57.070741  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.085470  160939 provision.go:87] duration metric: took 14.777244ms to configureAuth
	W0522 18:34:57.085486  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.085502  160939 retry.go:31] will retry after 23.959822ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.109694  160939 provision.go:84] configureAuth start
	I0522 18:34:57.109776  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.124985  160939 provision.go:87] duration metric: took 15.261358ms to configureAuth
	W0522 18:34:57.125000  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.125016  160939 retry.go:31] will retry after 27.869578ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.153221  160939 provision.go:84] configureAuth start
	I0522 18:34:57.153307  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.169108  160939 provision.go:87] duration metric: took 15.85438ms to configureAuth
	W0522 18:34:57.169127  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.169146  160939 retry.go:31] will retry after 51.257536ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.221342  160939 provision.go:84] configureAuth start
	I0522 18:34:57.221408  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.237003  160939 provision.go:87] duration metric: took 15.637311ms to configureAuth
	W0522 18:34:57.237024  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.237043  160939 retry.go:31] will retry after 39.576908ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.277194  160939 provision.go:84] configureAuth start
	I0522 18:34:57.277272  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.292521  160939 provision.go:87] duration metric: took 15.297184ms to configureAuth
	W0522 18:34:57.292539  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.292557  160939 retry.go:31] will retry after 99.452062ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.392811  160939 provision.go:84] configureAuth start
	I0522 18:34:57.392913  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.410711  160939 provision.go:87] duration metric: took 17.84636ms to configureAuth
	W0522 18:34:57.410765  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.410815  160939 retry.go:31] will retry after 143.960372ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.555133  160939 provision.go:84] configureAuth start
	I0522 18:34:57.555208  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.571320  160939 provision.go:87] duration metric: took 16.160526ms to configureAuth
	W0522 18:34:57.571343  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.571360  160939 retry.go:31] will retry after 155.348601ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.727681  160939 provision.go:84] configureAuth start
	I0522 18:34:57.727762  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.743313  160939 provision.go:87] duration metric: took 15.603694ms to configureAuth
	W0522 18:34:57.743335  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.743351  160939 retry.go:31] will retry after 378.804808ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.122902  160939 provision.go:84] configureAuth start
	I0522 18:34:58.123010  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.139688  160939 provision.go:87] duration metric: took 16.744877ms to configureAuth
	W0522 18:34:58.139707  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.139724  160939 retry.go:31] will retry after 334.927027ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.475218  160939 provision.go:84] configureAuth start
	I0522 18:34:58.475348  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.491224  160939 provision.go:87] duration metric: took 15.959288ms to configureAuth
	W0522 18:34:58.491241  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.491258  160939 retry.go:31] will retry after 382.857061ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.874898  160939 provision.go:84] configureAuth start
	I0522 18:34:58.875006  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.891400  160939 provision.go:87] duration metric: took 16.476022ms to configureAuth
	W0522 18:34:58.891425  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.891445  160939 retry.go:31] will retry after 908.607112ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.800452  160939 provision.go:84] configureAuth start
	I0522 18:34:59.800565  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:59.817521  160939 provision.go:87] duration metric: took 17.040678ms to configureAuth
	W0522 18:34:59.817541  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.817559  160939 retry.go:31] will retry after 2.399990762s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.218011  160939 provision.go:84] configureAuth start
	I0522 18:35:02.218103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:02.233382  160939 provision.go:87] duration metric: took 15.343422ms to configureAuth
	W0522 18:35:02.233400  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.233417  160939 retry.go:31] will retry after 3.631413751s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.866094  160939 provision.go:84] configureAuth start
	I0522 18:35:05.866192  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:05.883038  160939 provision.go:87] duration metric: took 16.913162ms to configureAuth
	W0522 18:35:05.883057  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.883075  160939 retry.go:31] will retry after 4.401726343s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.285941  160939 provision.go:84] configureAuth start
	I0522 18:35:10.286047  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:10.303158  160939 provision.go:87] duration metric: took 17.185304ms to configureAuth
	W0522 18:35:10.303178  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.303195  160939 retry.go:31] will retry after 5.499851087s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.803345  160939 provision.go:84] configureAuth start
	I0522 18:35:15.803456  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:15.820047  160939 provision.go:87] duration metric: took 16.668915ms to configureAuth
	W0522 18:35:15.820069  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.820088  160939 retry.go:31] will retry after 6.21478213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.035749  160939 provision.go:84] configureAuth start
	I0522 18:35:22.035888  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:22.052346  160939 provision.go:87] duration metric: took 16.569923ms to configureAuth
	W0522 18:35:22.052365  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.052383  160939 retry.go:31] will retry after 10.717404274s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.770612  160939 provision.go:84] configureAuth start
	I0522 18:35:32.770702  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:32.786847  160939 provision.go:87] duration metric: took 16.20902ms to configureAuth
	W0522 18:35:32.786866  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.786882  160939 retry.go:31] will retry after 26.374349839s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.162251  160939 provision.go:84] configureAuth start
	I0522 18:35:59.162338  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:59.177866  160939 provision.go:87] duration metric: took 15.590678ms to configureAuth
	W0522 18:35:59.177883  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.177900  160939 retry.go:31] will retry after 23.779194983s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.957560  160939 provision.go:84] configureAuth start
	I0522 18:36:22.957642  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:36:22.973473  160939 provision.go:87] duration metric: took 15.882846ms to configureAuth
	W0522 18:36:22.973490  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973508  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973514  160939 machine.go:97] duration metric: took 1m26.59751999s to provisionDockerMachine
	I0522 18:36:22.973521  160939 client.go:171] duration metric: took 1m32.0969361s to LocalClient.Create
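The retry ladder in both attempts (140µs growing through roughly 26s before ubuntu.go:189 gives up) is capped exponential backoff with jitter, driven by the retry.go:31 lines. A sketch with the same shape (the constants are guesses from the logged delays, not minikube's actual retry.go parameters):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpo retries f with roughly doubling, jittered waits until a
    // deadline, then reports the last error — the pattern the log shows.
    func retryExpo(f func() error, initial, deadline time.Duration) error {
    	start := time.Now()
    	wait := initial
    	var err error
    	for time.Since(start) < deadline {
    		if err = f(); err == nil {
    			return nil
    		}
    		jittered := wait + time.Duration(rand.Int63n(int64(wait)))
    		fmt.Printf("will retry after %v: %v\n", jittered, err)
    		time.Sleep(jittered)
    		wait *= 2
    	}
    	return fmt.Errorf("gave up after %v: %w", time.Since(start), err)
    }

    func main() {
    	err := retryExpo(func() error {
    		return errors.New("error getting ip during provisioning")
    	}, 100*time.Microsecond, 2*time.Second)
    	fmt.Println(err)
    }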
	I0522 18:36:24.974123  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:24.974170  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:36:24.990325  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:36:25.071724  160939 command_runner.go:130] > 27%
	I0522 18:36:25.071789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:36:25.075456  160939 command_runner.go:130] > 214G
	I0522 18:36:25.075742  160939 start.go:128] duration metric: took 1m34.201241799s to createHost
	I0522 18:36:25.075767  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m34.20136546s
	W0522 18:36:25.075854  160939 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:25.077767  160939 out.go:177] 
	W0522 18:36:25.079095  160939 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:36:25.079109  160939 out.go:239] * 
	W0522 18:36:25.079919  160939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:36:25.081455  160939 out.go:177] 
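
	Note on the failure above: each configureAuth attempt runs the docker container inspect template shown in the log, which prints "<IPv4>,<IPv6>" only when the container is attached to the named network; an empty result is consistent with the "got 1 values: []" error, though that interpretation is our reading and not stated by the log itself. A minimal way to re-run the check by hand (command copied from the log, quoting adapted for a shell):
	
	  # Re-run the exact template minikube uses to fetch the node IPs:
	  docker container inspect -f '{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' multinode-737786-m02
	  # List every network the container is actually attached to:
	  docker container inspect -f '{{json .NetworkSettings.Networks}}' multinode-737786-m02
	
	The report's own remediation is the one printed above: `minikube delete -p multinode-737786`, and `minikube logs --file=logs.txt` if the problem persists.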
	
	
	==> Docker <==
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.198952588Z" level=info msg="ignoring event" container=ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.201580059Z" level=info msg="ignoring event" container=b73d925361c0506c710632a45f5377f1a6bdeaf15f268313a07afd0bac2a2011 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.284487223Z" level=info msg="ignoring event" container=6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 dockerd[1210]: time="2024-05-22T18:33:06.284636073Z" level=info msg="ignoring event" container=d47b4f1b846de8efc0e1d2a9a093aa1c61b036813c0fa4e6fc255113be2d96f0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:33:06 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:33:06Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ada6e7b25c53306480ec3268f02ae3c0a31843cb50792174aefef87684d072cd/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:27 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:36:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7fefb8ab9046a93fa90099406fe22d3ab5b99d1e81ed91b35c2e7790f7cd2c3c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 18:36:29 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:36:29Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e5611854b2b6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   7fefb8ab9046a       busybox-fc5497c4f-7zbr8
	14ca8a91c3a85       cbb01a7bd410d                                                                                         15 minutes ago      Running             coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              15 minutes ago      Running             kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	16cb7c11afec8       6e38f40d628db                                                                                         15 minutes ago      Running             storage-provisioner       0                   27a641da2a092       storage-provisioner
	b73d925361c05       cbb01a7bd410d                                                                                         15 minutes ago      Exited              coredns                   0                   6711c2a968d71       coredns-7db6d8ff4d-jhsz9
	4394527287d9e       747097150317f                                                                                         15 minutes ago      Running             kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                                         15 minutes ago      Running             kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                                         15 minutes ago      Running             etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                                         15 minutes ago      Running             kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                                         15 minutes ago      Running             kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	[INFO] 10.244.0.3:48378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238684s
	[INFO] 10.244.0.3:59221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013090305s
	[INFO] 10.244.0.3:42881 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000740933s
	[INFO] 10.244.0.3:51488 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.022252255s
	[INFO] 10.244.0.3:57389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143058s
	[INFO] 10.244.0.3:48854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005255577s
	[INFO] 10.244.0.3:37749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129992s
	[INFO] 10.244.0.3:49159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143259s
	[INFO] 10.244.0.3:33267 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003880164s
	[INFO] 10.244.0.3:55644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123464s
	[INFO] 10.244.0.3:40518 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115443s
	[INFO] 10.244.0.3:44250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088045s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102385s
	[INFO] 10.244.0.3:58734 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104426s
	[INFO] 10.244.0.3:33373 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089833s
	[INFO] 10.244.0.3:46218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084391s
	[INFO] 10.244.0.3:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011407s
	[INFO] 10.244.0.3:41894 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140377s
	[INFO] 10.244.0.3:40760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132699s
	[INFO] 10.244.0.3:37622 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097943s
	
	
	==> coredns [b73d925361c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
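
	Note: the "network is unreachable" lines above show this earlier coredns replica failing to reach the in-cluster API address 10.96.0.1:443 (and the host DNS at 192.168.67.1:53) before it exited. A hedged sanity check from the host, assuming the kubeconfig context name used by the helper commands later in this report:
	
	  # Confirm the endpoints backing the 10.96.0.1 service address:
	  kubectl --context multinode-737786 get endpoints kubernetes -n default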
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:48:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:46:55 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 796df425fb994719a2b6ac89f60c2334
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     15m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         15m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      15m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           15m   node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
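
	Note: the node view above is a point-in-time capture; it can be regenerated against the same cluster (context name taken from the helper commands later in this report) with:
	
	  kubectl --context multinode-737786 describe node multinode-737786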
	
	
	==> dmesg <==
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	[May22 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 88 87 ea 82 8c 08 06
	[  +0.002367] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 1a b3 ac 14 45 08 06
	[May22 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 89 e2 0f b2 b8 08 06
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.364321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.365643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.365639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:32:33.365646Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.365693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.36588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.365903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	{"level":"info","ts":"2024-05-22T18:42:33.669298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-05-22T18:42:33.674226Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":669,"took":"4.650962ms","hash":2988179383,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-22T18:42:33.674261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2988179383,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:47:33.674441Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-05-22T18:47:33.676887Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":911,"took":"2.169071ms","hash":3399617496,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:47:33.676921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3399617496,"revision":911,"compact-revision":669}
	
	
	==> kernel <==
	 18:48:14 up  1:30,  0 users,  load average: 0.50, 0.20, 0.28
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:46:06.593950       1 main.go:227] handling current node
	I0522 18:46:16.597240       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:16.597262       1 main.go:227] handling current node
	I0522 18:46:26.600369       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:26.600394       1 main.go:227] handling current node
	I0522 18:46:36.603602       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:36.603625       1 main.go:227] handling current node
	I0522 18:46:46.614028       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:46.614050       1 main.go:227] handling current node
	I0522 18:46:56.617533       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:46:56.617557       1 main.go:227] handling current node
	I0522 18:47:06.626038       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:06.626059       1 main.go:227] handling current node
	I0522 18:47:16.629267       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:16.629291       1 main.go:227] handling current node
	I0522 18:47:26.641682       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:26.641705       1 main.go:227] handling current node
	I0522 18:47:36.644822       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:36.644845       1 main.go:227] handling current node
	I0522 18:47:46.656212       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:46.656241       1 main.go:227] handling current node
	I0522 18:47:56.660170       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:47:56.660193       1 main.go:227] handling current node
	I0522 18:48:06.672213       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:48:06.672242       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6991b35c6800] <==
	I0522 18:32:35.449798       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:32:35.453291       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:32:35.453308       1 policy_source.go:224] refreshing policies
	I0522 18:32:35.468422       1 controller.go:615] quota admission added evaluator for: namespaces
	I0522 18:32:35.648097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:32:36.270908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 18:32:36.276360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 18:32:36.276373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:32:36.650126       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 18:32:36.683129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 18:32:36.777692       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 18:32:36.791941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0522 18:32:36.793832       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:32:36.798754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 18:32:37.359568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 18:32:37.803958       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 18:32:37.812834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 18:32:37.819384       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 18:32:51.513861       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 18:32:51.614880       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:48:10.913684       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57644: use of closed network connection
	E0522 18:48:11.175047       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57696: use of closed network connection
	E0522 18:48:11.423032       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57770: use of closed network connection
	E0522 18:48:13.525053       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57842: use of closed network connection
	E0522 18:48:13.672815       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57864: use of closed network connection
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	I0522 18:36:27.123251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.253947ms"
	I0522 18:36:27.133722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.381144ms"
	I0522 18:36:27.133807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.98µs"
	I0522 18:36:27.133845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.606µs"
	I0522 18:36:30.202749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.775378ms"
	I0522 18:36:30.202822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.162µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:35.377344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.252907    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.988563    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhhmr" podStartSLOduration=2.9885258439999998 podStartE2EDuration="2.988525844s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.988079663 +0000 UTC m=+16.414649501" watchObservedRunningTime="2024-05-22 18:32:53.988525844 +0000 UTC m=+16.415095679"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.995975    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.995953678 podStartE2EDuration="995.953678ms" podCreationTimestamp="2024-05-22 18:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.995721962 +0000 UTC m=+16.422291803" watchObservedRunningTime="2024-05-22 18:32:53.995953678 +0000 UTC m=+16.422523513"
	May 22 18:32:54 multinode-737786 kubelet[2370]: I0522 18:32:54.011952    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jhsz9" podStartSLOduration=3.011934656 podStartE2EDuration="3.011934656s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:54.011824217 +0000 UTC m=+16.438394051" watchObservedRunningTime="2024-05-22 18:32:54.011934656 +0000 UTC m=+16.438504490"
	May 22 18:32:56 multinode-737786 kubelet[2370]: I0522 18:32:56.027149    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qpfbl" podStartSLOduration=2.150242403 podStartE2EDuration="5.027130161s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="2024-05-22 18:32:52.549285586 +0000 UTC m=+14.975855404" lastFinishedPulling="2024-05-22 18:32:55.426173334 +0000 UTC m=+17.852743162" observedRunningTime="2024-05-22 18:32:56.026868759 +0000 UTC m=+18.453438592" watchObservedRunningTime="2024-05-22 18:32:56.027130161 +0000 UTC m=+18.453699994"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.024575    2370 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.025200    2370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467011    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467063    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467471    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.469105    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9" (OuterVolumeSpecName: "kube-api-access-44bz9") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "kube-api-access-44bz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567723    2370 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567767    2370 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.104709    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.116635    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.118819    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: E0522 18:33:07.119523    2370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.119568    2370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"} err="failed to get container status \"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de\": rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.656301    2370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" path="/var/lib/kubelet/pods/be9eeea7-ca23-4606-8965-0eb7a95e4a0d/volumes"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113341    2370 topology_manager.go:215] "Topology Admit Handler" podUID="3cb1c926-1ddd-432d-bfae-23cc2cf1d67e" podNamespace="default" podName="busybox-fc5497c4f-7zbr8"
	May 22 18:36:27 multinode-737786 kubelet[2370]: E0522 18:36:27.113441    2370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113480    2370 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.310549    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2v4\" (UniqueName: \"kubernetes.io/projected/3cb1c926-1ddd-432d-bfae-23cc2cf1d67e-kube-api-access-bt2v4\") pod \"busybox-fc5497c4f-7zbr8\" (UID: \"3cb1c926-1ddd-432d-bfae-23cc2cf1d67e\") " pod="default/busybox-fc5497c4f-7zbr8"
	May 22 18:36:30 multinode-737786 kubelet[2370]: I0522 18:36:30.199164    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-7zbr8" podStartSLOduration=1.5746006019999998 podStartE2EDuration="3.199142439s" podCreationTimestamp="2024-05-22 18:36:27 +0000 UTC" firstStartedPulling="2024-05-22 18:36:27.886226491 +0000 UTC m=+230.312796315" lastFinishedPulling="2024-05-22 18:36:29.510768323 +0000 UTC m=+231.937338152" observedRunningTime="2024-05-22 18:36:30.198865287 +0000 UTC m=+232.625435120" watchObservedRunningTime="2024-05-22 18:36:30.199142439 +0000 UTC m=+232.625712274"
	May 22 18:48:11 multinode-737786 kubelet[2370]: E0522 18:48:11.423039    2370 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:55084->[::1]:43097: write tcp [::1]:55084->[::1]:43097: write: broken pipe
	
	
	==> storage-provisioner [16cb7c11afec] <==
	I0522 18:32:53.558799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:32:53.565899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:32:53.565955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:32:53.572167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:32:53.572280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	I0522 18:32:53.573084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef became leader
	I0522 18:32:53.672834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
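
The storage-provisioner lines above show the standard Kubernetes leader-election handshake: the pod attempts to acquire the kube-system/k8s.io-minikube-hostpath lock, records a LeaderElection event once it wins, and only then starts the provisioner controller. A minimal client-go sketch of the same pattern, assuming a Lease-based lock and illustrative timings and identity (the provisioner above uses an older Endpoints-based lock, as the Event line shows):

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Same lock name and namespace as the log above; Lease instead of Endpoints.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "my-provisioner-id"}, // hypothetical ID
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// The real provisioner starts its controller here.
				log.Println("acquired lease, starting provisioner controller")
			},
			OnStoppedLeading: func() { log.Println("lost lease, shutting down") },
		},
	})
}
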
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  96s (x3 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (2.26s)
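
The FailedScheduling event in the post-mortem explains why busybox-fc5497c4f-cq58n never leaves Pending: the busybox Deployment spreads its replicas with a required pod anti-affinity rule, the second node never joined (AddNode fails below), so with one node already holding a busybox pod the second replica has nowhere to go. A sketch of the kind of anti-affinity term that produces exactly this event, assuming the test spreads on the app=busybox label over kubernetes.io/hostname (a hypothetical reconstruction, not the test's actual manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// "0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules":
	// one busybox pod per hostname, so a single ready node can hold only one replica.
	anti := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{MatchLabels: map[string]string{"app": "busybox"}},
				TopologyKey:   "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", anti)
}
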

                                                
                                    
x
+
TestMultiNode/serial/AddNode (247.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-737786 -v 3 --alsologtostderr
E0522 18:51:55.310859   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
multinode_test.go:121: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-737786 -v 3 --alsologtostderr: exit status 80 (4m5.365097455s)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-737786 as [worker]
	* Starting "multinode-737786-m03" worker node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	* Stopping node "multinode-737786-m03"  ...
	* Powering off "multinode-737786-m03" via SSH ...
	* Deleting "multinode-737786-m03" in docker ...
	* Creating docker container (CPUs=2, Memory=2200MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:48:15.581654  176717 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:48:15.581905  176717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:48:15.581915  176717 out.go:304] Setting ErrFile to fd 2...
	I0522 18:48:15.581921  176717 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:48:15.582123  176717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:48:15.582375  176717 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:48:15.582716  176717 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:48:15.583155  176717 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:48:15.599468  176717 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:48:15.599734  176717 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:48:15.648678  176717 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:48:15.639772155 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:48:15.648834  176717 api_server.go:166] Checking apiserver status ...
	I0522 18:48:15.648881  176717 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:48:15.648926  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:48:15.666136  176717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:48:15.753309  176717 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:48:15.761404  176717 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:48:15.761470  176717 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:48:15.768687  176717 api_server.go:204] freezer state: "THAWED"
	I0522 18:48:15.768709  176717 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:48:15.772233  176717 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
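
The three steps at 18:48:15.648 through 15.772 are minikube's standard liveness walk for an existing cluster: pgrep the kube-apiserver process over SSH, confirm its freezer cgroup is THAWED (i.e. the container is not paused), then GET /healthz on the API endpoint. A minimal sketch of that final probe, assuming TLS verification is skipped purely for illustration (the real code in api_server.go uses the cluster's CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify only for this sketch; the apiserver presents a
	// cluster-local self-signed certificate.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz:", resp.StatusCode) // 200 with body "ok", as in the log
}
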
	I0522 18:48:15.774228  176717 out.go:177] * Adding node m03 to cluster multinode-737786 as [worker]
	I0522 18:48:15.775516  176717 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:48:15.775612  176717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:48:15.777169  176717 out.go:177] * Starting "multinode-737786-m03" worker node in "multinode-737786" cluster
	I0522 18:48:15.778287  176717 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:48:15.779468  176717 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:48:15.780659  176717 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:48:15.780684  176717 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:48:15.780692  176717 cache.go:56] Caching tarball of preloaded images
	I0522 18:48:15.780752  176717 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:48:15.780776  176717 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:48:15.780787  176717 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:48:15.780862  176717 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:48:15.796156  176717 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:48:15.796176  176717 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:48:15.796192  176717 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:48:15.796218  176717 start.go:360] acquireMachinesLock for multinode-737786-m03: {Name:mk1ab0dc50e34cae21563ba34f13025bd2451afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:48:15.796305  176717 start.go:364] duration metric: took 68.385µs to acquireMachinesLock for "multinode-737786-m03"
	I0522 18:48:15.796325  176717 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0522 18:48:15.796402  176717 start.go:125] createHost starting for "m03" (driver="docker")
	I0522 18:48:15.798333  176717 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:48:15.798440  176717 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:48:15.798467  176717 client.go:168] LocalClient.Create starting
	I0522 18:48:15.798544  176717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:48:15.798574  176717 main.go:141] libmachine: Decoding PEM data...
	I0522 18:48:15.798589  176717 main.go:141] libmachine: Parsing certificate...
	I0522 18:48:15.798630  176717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:48:15.798646  176717 main.go:141] libmachine: Decoding PEM data...
	I0522 18:48:15.798657  176717 main.go:141] libmachine: Parsing certificate...
	I0522 18:48:15.798849  176717 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:48:15.813288  176717 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc00085c180 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:48:15.813328  176717 kic.go:121] calculated static IP "192.168.67.4" for the "multinode-737786-m03" container
	I0522 18:48:15.813385  176717 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:48:15.827757  176717 cli_runner.go:164] Run: docker volume create multinode-737786-m03 --label name.minikube.sigs.k8s.io=multinode-737786-m03 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:48:15.843872  176717 oci.go:103] Successfully created a docker volume multinode-737786-m03
	I0522 18:48:15.843968  176717 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m03 --entrypoint /usr/bin/test -v multinode-737786-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:48:16.312706  176717 oci.go:107] Successfully prepared a docker volume multinode-737786-m03
	I0522 18:48:16.312741  176717 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:48:16.312758  176717 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:48:16.312814  176717 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:48:20.377778  176717 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.064905014s)
	I0522 18:48:20.377808  176717 kic.go:203] duration metric: took 4.065047089s to extract preloaded images to volume ...
	W0522 18:48:20.378101  176717 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:48:20.378189  176717 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:48:20.420593  176717 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m03 --name multinode-737786-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m03 --network multinode-737786 --ip 192.168.67.4 --volume multinode-737786-m03:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:48:20.714790  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Running}}
	I0522 18:48:20.732005  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:48:20.748537  176717 cli_runner.go:164] Run: docker exec multinode-737786-m03 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:48:20.789272  176717 oci.go:144] the created container "multinode-737786-m03" has a running status.
	I0522 18:48:20.789299  176717 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa...
	I0522 18:48:20.934841  176717 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:48:20.957511  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:48:20.975567  176717 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:48:20.975588  176717 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:48:21.018966  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:48:21.038454  176717 machine.go:94] provisionDockerMachine start ...
	I0522 18:48:21.038543  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:48:21.064663  176717 main.go:141] libmachine: Using SSH client type: native
	I0522 18:48:21.064897  176717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32912 <nil> <nil>}
	I0522 18:48:21.064916  176717 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:48:21.298708  176717 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m03
	
	I0522 18:48:21.298734  176717 ubuntu.go:169] provisioning hostname "multinode-737786-m03"
	I0522 18:48:21.298787  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:48:21.315421  176717 main.go:141] libmachine: Using SSH client type: native
	I0522 18:48:21.315589  176717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32912 <nil> <nil>}
	I0522 18:48:21.315603  176717 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m03 && echo "multinode-737786-m03" | sudo tee /etc/hostname
	I0522 18:48:21.441928  176717 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m03
	
	I0522 18:48:21.442022  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:48:21.458451  176717 main.go:141] libmachine: Using SSH client type: native
	I0522 18:48:21.458653  176717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32912 <nil> <nil>}
	I0522 18:48:21.458685  176717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:48:21.571185  176717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:48:21.571215  176717 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:48:21.571288  176717 ubuntu.go:177] setting up certificates
	I0522 18:48:21.571310  176717 provision.go:84] configureAuth start
	I0522 18:48:21.571371  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.586777  176717 provision.go:87] duration metric: took 15.42936ms to configureAuth
	W0522 18:48:21.586802  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.586832  176717 retry.go:31] will retry after 76.055µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
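
Every configureAuth attempt from here on fails identically, and the inspect command in the Run line above shows why: the Go template indexes .NetworkSettings.Networks by the container name, multinode-737786-m03, but the container was attached to the network named multinode-737786 (see the docker run at 18:48:20.420). docker's Networks map is keyed by network name, so the {{with ...}} block finds no entry and renders an empty string, and code expecting an "IPv4,IPv6" pair splits that empty string into one field instead of two. A sketch of the failing parse with the key mixup reproduced, assuming minikube splits the rendered template output on a comma:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// {{with (index .NetworkSettings.Networks "multinode-737786-m03")}}...{{end}}
	// renders nothing, because the container sits on network "multinode-737786".
	rendered := ""

	addrs := strings.Split(rendered, ",") // [""] — one element, not two
	if len(addrs) != 2 {
		// Matches the log: "container addresses should have 2 values, got 1 values: []"
		fmt.Printf("container addresses should have 2 values, got %d values: %v\n",
			len(addrs), addrs)
	}
}
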
	I0522 18:48:21.587942  176717 provision.go:84] configureAuth start
	I0522 18:48:21.587997  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.602876  176717 provision.go:87] duration metric: took 14.905706ms to configureAuth
	W0522 18:48:21.602892  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.602909  176717 retry.go:31] will retry after 199.979µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.604020  176717 provision.go:84] configureAuth start
	I0522 18:48:21.604078  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.619699  176717 provision.go:87] duration metric: took 15.659927ms to configureAuth
	W0522 18:48:21.619720  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.619737  176717 retry.go:31] will retry after 283.041µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.620852  176717 provision.go:84] configureAuth start
	I0522 18:48:21.620915  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.636011  176717 provision.go:87] duration metric: took 15.141194ms to configureAuth
	W0522 18:48:21.636030  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.636048  176717 retry.go:31] will retry after 464.777µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.637157  176717 provision.go:84] configureAuth start
	I0522 18:48:21.637309  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.654410  176717 provision.go:87] duration metric: took 17.207031ms to configureAuth
	W0522 18:48:21.654430  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.654446  176717 retry.go:31] will retry after 586.11µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.655556  176717 provision.go:84] configureAuth start
	I0522 18:48:21.655627  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.672446  176717 provision.go:87] duration metric: took 16.869952ms to configureAuth
	W0522 18:48:21.672462  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.672476  176717 retry.go:31] will retry after 691.759µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.673537  176717 provision.go:84] configureAuth start
	I0522 18:48:21.673600  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.688906  176717 provision.go:87] duration metric: took 15.351566ms to configureAuth
	W0522 18:48:21.688923  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.688937  176717 retry.go:31] will retry after 1.144139ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.691119  176717 provision.go:84] configureAuth start
	I0522 18:48:21.691171  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.706122  176717 provision.go:87] duration metric: took 14.986148ms to configureAuth
	W0522 18:48:21.706141  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.706159  176717 retry.go:31] will retry after 2.083457ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.708269  176717 provision.go:84] configureAuth start
	I0522 18:48:21.708333  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.723059  176717 provision.go:87] duration metric: took 14.767303ms to configureAuth
	W0522 18:48:21.723075  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.723091  176717 retry.go:31] will retry after 3.030902ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.726201  176717 provision.go:84] configureAuth start
	I0522 18:48:21.726256  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.741791  176717 provision.go:87] duration metric: took 15.572855ms to configureAuth
	W0522 18:48:21.741808  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.741823  176717 retry.go:31] will retry after 2.393538ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.745037  176717 provision.go:84] configureAuth start
	I0522 18:48:21.745107  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.759589  176717 provision.go:87] duration metric: took 14.528585ms to configureAuth
	W0522 18:48:21.759608  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.759626  176717 retry.go:31] will retry after 5.815959ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.765878  176717 provision.go:84] configureAuth start
	I0522 18:48:21.765962  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.780725  176717 provision.go:87] duration metric: took 14.829799ms to configureAuth
	W0522 18:48:21.780749  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.780766  176717 retry.go:31] will retry after 4.908931ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.785944  176717 provision.go:84] configureAuth start
	I0522 18:48:21.785995  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.800428  176717 provision.go:87] duration metric: took 14.46881ms to configureAuth
	W0522 18:48:21.800444  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.800476  176717 retry.go:31] will retry after 12.414063ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.813663  176717 provision.go:84] configureAuth start
	I0522 18:48:21.813742  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.828633  176717 provision.go:87] duration metric: took 14.953821ms to configureAuth
	W0522 18:48:21.828648  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.828667  176717 retry.go:31] will retry after 26.952464ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.855898  176717 provision.go:84] configureAuth start
	I0522 18:48:21.856008  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.871607  176717 provision.go:87] duration metric: took 15.685982ms to configureAuth
	W0522 18:48:21.871627  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.871659  176717 retry.go:31] will retry after 30.265501ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.902870  176717 provision.go:84] configureAuth start
	I0522 18:48:21.902951  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.918649  176717 provision.go:87] duration metric: took 15.756892ms to configureAuth
	W0522 18:48:21.918666  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.918681  176717 retry.go:31] will retry after 42.283867ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.961981  176717 provision.go:84] configureAuth start
	I0522 18:48:21.962097  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:21.977521  176717 provision.go:87] duration metric: took 15.51233ms to configureAuth
	W0522 18:48:21.977553  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:21.977568  176717 retry.go:31] will retry after 37.811729ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.015762  176717 provision.go:84] configureAuth start
	I0522 18:48:22.015843  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:22.030986  176717 provision.go:87] duration metric: took 15.202235ms to configureAuth
	W0522 18:48:22.031004  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.031019  176717 retry.go:31] will retry after 74.438149ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.106285  176717 provision.go:84] configureAuth start
	I0522 18:48:22.106357  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:22.122240  176717 provision.go:87] duration metric: took 15.925562ms to configureAuth
	W0522 18:48:22.122262  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.122278  176717 retry.go:31] will retry after 157.753023ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.280583  176717 provision.go:84] configureAuth start
	I0522 18:48:22.280659  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:22.296600  176717 provision.go:87] duration metric: took 15.992427ms to configureAuth
	W0522 18:48:22.296630  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.296646  176717 retry.go:31] will retry after 189.892553ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.487011  176717 provision.go:84] configureAuth start
	I0522 18:48:22.487116  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:22.503018  176717 provision.go:87] duration metric: took 15.97994ms to configureAuth
	W0522 18:48:22.503040  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.503058  176717 retry.go:31] will retry after 465.77719ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.969693  176717 provision.go:84] configureAuth start
	I0522 18:48:22.969850  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:22.985565  176717 provision.go:87] duration metric: took 15.840342ms to configureAuth
	W0522 18:48:22.985581  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:22.985597  176717 retry.go:31] will retry after 358.814493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:23.345148  176717 provision.go:84] configureAuth start
	I0522 18:48:23.345235  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:23.361218  176717 provision.go:87] duration metric: took 16.041086ms to configureAuth
	W0522 18:48:23.361240  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:23.361255  176717 retry.go:31] will retry after 1.008947361s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:24.370344  176717 provision.go:84] configureAuth start
	I0522 18:48:24.370469  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:24.385986  176717 provision.go:87] duration metric: took 15.616001ms to configureAuth
	W0522 18:48:24.386006  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:24.386026  176717 retry.go:31] will retry after 895.294757ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:25.282016  176717 provision.go:84] configureAuth start
	I0522 18:48:25.282120  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:25.298049  176717 provision.go:87] duration metric: took 16.007304ms to configureAuth
	W0522 18:48:25.298064  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:25.298079  176717 retry.go:31] will retry after 1.714805256s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:27.013963  176717 provision.go:84] configureAuth start
	I0522 18:48:27.014074  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:27.030913  176717 provision.go:87] duration metric: took 16.924949ms to configureAuth
	W0522 18:48:27.030931  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:27.030946  176717 retry.go:31] will retry after 2.697074721s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:29.730751  176717 provision.go:84] configureAuth start
	I0522 18:48:29.730850  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:29.746407  176717 provision.go:87] duration metric: took 15.63228ms to configureAuth
	W0522 18:48:29.746428  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:29.746448  176717 retry.go:31] will retry after 5.191535273s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:34.940383  176717 provision.go:84] configureAuth start
	I0522 18:48:34.940469  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:34.955690  176717 provision.go:87] duration metric: took 15.281554ms to configureAuth
	W0522 18:48:34.955729  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:34.955748  176717 retry.go:31] will retry after 4.978703843s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:39.937040  176717 provision.go:84] configureAuth start
	I0522 18:48:39.937132  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:39.953756  176717 provision.go:87] duration metric: took 16.690111ms to configureAuth
	W0522 18:48:39.953777  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:39.953795  176717 retry.go:31] will retry after 10.180483681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:50.135335  176717 provision.go:84] configureAuth start
	I0522 18:48:50.135441  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:48:50.151859  176717 provision.go:87] duration metric: took 16.499436ms to configureAuth
	W0522 18:48:50.151876  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:48:50.151891  176717 retry.go:31] will retry after 15.356930373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:49:05.511332  176717 provision.go:84] configureAuth start
	I0522 18:49:05.511412  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:49:05.527631  176717 provision.go:87] duration metric: took 16.271987ms to configureAuth
	W0522 18:49:05.527648  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:49:05.527665  176717 retry.go:31] will retry after 23.481962095s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:49:29.011349  176717 provision.go:84] configureAuth start
	I0522 18:49:29.011438  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:49:29.028371  176717 provision.go:87] duration metric: took 16.993998ms to configureAuth
	W0522 18:49:29.028391  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:49:29.028410  176717 retry.go:31] will retry after 36.398003434s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:05.427344  176717 provision.go:84] configureAuth start
	I0522 18:50:05.427461  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:05.443601  176717 provision.go:87] duration metric: took 16.227701ms to configureAuth
	W0522 18:50:05.443620  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:05.443637  176717 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:05.443643  176717 machine.go:97] duration metric: took 1m44.405169009s to provisionDockerMachine
	I0522 18:50:05.443650  176717 client.go:171] duration metric: took 1m49.645176712s to LocalClient.Create
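
Note the retry cadence above: configureAuth is retried with jittered, roughly exponentially growing delays, from 76µs up to about 36s, until roughly two minutes have elapsed, at which point provisioning gives up and the machine create is torn down (1m44s of the 1m49s LocalClient.Create was spent in this loop). The shape matches a capped exponential backoff; a generic sketch of that pattern follows, not minikube's exact retry.go implementation:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with jittered exponential backoff until maxElapsed passes.
func retryExpo(fn func() error, initial, maxElapsed time.Duration) error {
	start := time.Now()
	delay := initial
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Since(start) > maxElapsed {
			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
		}
		// Full jitter: sleep a random duration up to the current cap, then
		// double the cap, mirroring the growing delays in the log.
		time.Sleep(time.Duration(rand.Int63n(int64(delay) + 1)))
		delay *= 2
	}
}

func main() {
	err := retryExpo(func() error {
		return errors.New("error getting ip during provisioning")
	}, 100*time.Microsecond, 2*time.Minute)
	fmt.Println(err)
}
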
	I0522 18:50:07.445022  176717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:50:07.445128  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:50:07.461537  176717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32912 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa Username:docker}
	I0522 18:50:07.543708  176717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:50:07.547688  176717 start.go:128] duration metric: took 1m51.751262279s to createHost
	I0522 18:50:07.547710  176717 start.go:83] releasing machines lock for "multinode-737786-m03", held for 1m51.751394747s
	W0522 18:50:07.547728  176717 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:07.548117  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:07.564136  176717 stop.go:39] StopHost: multinode-737786-m03
	W0522 18:50:07.564379  176717 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:50:07.566576  176717 out.go:177] * Stopping node "multinode-737786-m03"  ...
	I0522 18:50:07.567951  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	W0522 18:50:07.582645  176717 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:50:07.584233  176717 out.go:177] * Powering off "multinode-737786-m03" via SSH ...
	I0522 18:50:07.585419  176717 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m03 /bin/bash -c "sudo init 0"
	I0522 18:50:08.642097  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:08.657972  176717 oci.go:658] container multinode-737786-m03 status is Stopped
	I0522 18:50:08.658004  176717 oci.go:670] Successfully shutdown container multinode-737786-m03
	I0522 18:50:08.658010  176717 stop.go:96] shutdown container: err=<nil>
	I0522 18:50:08.658029  176717 main.go:141] libmachine: Stopping "multinode-737786-m03"...
	I0522 18:50:08.658078  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:08.674456  176717 stop.go:66] stop err: Machine "multinode-737786-m03" is already stopped.
	I0522 18:50:08.674480  176717 stop.go:69] host is already stopped
	W0522 18:50:09.675323  176717 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:50:09.677367  176717 out.go:177] * Deleting "multinode-737786-m03" in docker ...
	I0522 18:50:09.678660  176717 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m03
	I0522 18:50:09.694942  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:09.710209  176717 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m03 /bin/bash -c "sudo init 0"
	W0522 18:50:09.725606  176717 cli_runner.go:211] docker exec --privileged -t multinode-737786-m03 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:50:09.725642  176717 oci.go:650] error shutdown multinode-737786-m03: docker exec --privileged -t multinode-737786-m03 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 0ab8e50fd321b406915e2e2559233ee54e4c9d74878de8e6812194bd42d6ff13 is not running
	I0522 18:50:10.725802  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:10.741641  176717 oci.go:658] container multinode-737786-m03 status is Stopped
	I0522 18:50:10.741667  176717 oci.go:670] Successfully shutdown container multinode-737786-m03
	I0522 18:50:10.741704  176717 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m03
	I0522 18:50:10.762192  176717 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m03
	W0522 18:50:10.776434  176717 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m03 returned with exit code 1
	I0522 18:50:10.776509  176717 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:50:10.791567  176717 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:50:10.810375  176717 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:50:10.810463  176717 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
	W0522 18:50:10.810661  176717 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:10.810679  176717 start.go:728] Will try again in 5 seconds ...
	I0522 18:50:15.810790  176717 start.go:360] acquireMachinesLock for multinode-737786-m03: {Name:mk1ab0dc50e34cae21563ba34f13025bd2451afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:50:15.810891  176717 start.go:364] duration metric: took 69.28µs to acquireMachinesLock for "multinode-737786-m03"
	I0522 18:50:15.810923  176717 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0522 18:50:15.811006  176717 start.go:125] createHost starting for "m03" (driver="docker")
	I0522 18:50:15.813101  176717 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:50:15.813198  176717 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:50:15.813216  176717 client.go:168] LocalClient.Create starting
	I0522 18:50:15.813271  176717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:50:15.813303  176717 main.go:141] libmachine: Decoding PEM data...
	I0522 18:50:15.813317  176717 main.go:141] libmachine: Parsing certificate...
	I0522 18:50:15.813383  176717 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:50:15.813401  176717 main.go:141] libmachine: Decoding PEM data...
	I0522 18:50:15.813419  176717 main.go:141] libmachine: Parsing certificate...
	I0522 18:50:15.813606  176717 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:50:15.829269  176717 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc00085d5c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:50:15.829303  176717 kic.go:121] calculated static IP "192.168.67.4" for the "multinode-737786-m03" container
	I0522 18:50:15.829352  176717 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:50:15.845023  176717 cli_runner.go:164] Run: docker volume create multinode-737786-m03 --label name.minikube.sigs.k8s.io=multinode-737786-m03 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:50:15.858760  176717 oci.go:103] Successfully created a docker volume multinode-737786-m03
	I0522 18:50:15.858844  176717 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m03 --entrypoint /usr/bin/test -v multinode-737786-m03:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:50:16.250969  176717 oci.go:107] Successfully prepared a docker volume multinode-737786-m03
	I0522 18:50:16.251013  176717 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:50:16.251032  176717 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:50:16.251091  176717 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:50:20.505214  176717 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m03:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.254088161s)
	I0522 18:50:20.505246  176717 kic.go:203] duration metric: took 4.254211291s to extract preloaded images to volume ...
	W0522 18:50:20.505382  176717 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:50:20.505487  176717 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:50:20.550322  176717 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m03 --name multinode-737786-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m03 --network multinode-737786 --ip 192.168.67.4 --volume multinode-737786-m03:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:50:20.827162  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Running}}
	I0522 18:50:20.843985  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:20.861882  176717 cli_runner.go:164] Run: docker exec multinode-737786-m03 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:50:20.903029  176717 oci.go:144] the created container "multinode-737786-m03" has a running status.
	I0522 18:50:20.903061  176717 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa...
	I0522 18:50:20.980195  176717 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:50:20.999228  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:21.014945  176717 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:50:21.014971  176717 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:50:21.052603  176717 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:50:21.069461  176717 machine.go:94] provisionDockerMachine start ...
	I0522 18:50:21.069556  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:50:21.087806  176717 main.go:141] libmachine: Using SSH client type: native
	I0522 18:50:21.088009  176717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32917 <nil> <nil>}
	I0522 18:50:21.088022  176717 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:50:21.088680  176717 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59106->127.0.0.1:32917: read: connection reset by peer
	I0522 18:50:24.202443  176717 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m03
	
	I0522 18:50:24.202467  176717 ubuntu.go:169] provisioning hostname "multinode-737786-m03"
	I0522 18:50:24.202523  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:50:24.218541  176717 main.go:141] libmachine: Using SSH client type: native
	I0522 18:50:24.218724  176717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32917 <nil> <nil>}
	I0522 18:50:24.218743  176717 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m03 && echo "multinode-737786-m03" | sudo tee /etc/hostname
	I0522 18:50:24.341289  176717 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m03
	
	I0522 18:50:24.341357  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:50:24.357589  176717 main.go:141] libmachine: Using SSH client type: native
	I0522 18:50:24.357759  176717 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32917 <nil> <nil>}
	I0522 18:50:24.357775  176717 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:50:24.471091  176717 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:50:24.471117  176717 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:50:24.471131  176717 ubuntu.go:177] setting up certificates
	I0522 18:50:24.471141  176717 provision.go:84] configureAuth start
	I0522 18:50:24.471187  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.486638  176717 provision.go:87] duration metric: took 15.48906ms to configureAuth
	W0522 18:50:24.486657  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.486672  176717 retry.go:31] will retry after 76.541µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.487786  176717 provision.go:84] configureAuth start
	I0522 18:50:24.487841  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.502788  176717 provision.go:87] duration metric: took 14.985692ms to configureAuth
	W0522 18:50:24.502805  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.502820  176717 retry.go:31] will retry after 175.871µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.503927  176717 provision.go:84] configureAuth start
	I0522 18:50:24.503981  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.519075  176717 provision.go:87] duration metric: took 15.129794ms to configureAuth
	W0522 18:50:24.519094  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.519110  176717 retry.go:31] will retry after 287.704µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.520213  176717 provision.go:84] configureAuth start
	I0522 18:50:24.520267  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.535000  176717 provision.go:87] duration metric: took 14.772116ms to configureAuth
	W0522 18:50:24.535017  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.535033  176717 retry.go:31] will retry after 408.281µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.536139  176717 provision.go:84] configureAuth start
	I0522 18:50:24.536194  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.550917  176717 provision.go:87] duration metric: took 14.759298ms to configureAuth
	W0522 18:50:24.550939  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.550955  176717 retry.go:31] will retry after 749.509µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.552065  176717 provision.go:84] configureAuth start
	I0522 18:50:24.552125  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.567398  176717 provision.go:87] duration metric: took 15.316904ms to configureAuth
	W0522 18:50:24.567417  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.567435  176717 retry.go:31] will retry after 381.955µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.568553  176717 provision.go:84] configureAuth start
	I0522 18:50:24.568616  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.583367  176717 provision.go:87] duration metric: took 14.795345ms to configureAuth
	W0522 18:50:24.583384  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.583398  176717 retry.go:31] will retry after 1.400054ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.585573  176717 provision.go:84] configureAuth start
	I0522 18:50:24.585634  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.599949  176717 provision.go:87] duration metric: took 14.361311ms to configureAuth
	W0522 18:50:24.599965  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.599980  176717 retry.go:31] will retry after 988.208µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.601089  176717 provision.go:84] configureAuth start
	I0522 18:50:24.601148  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.618955  176717 provision.go:87] duration metric: took 17.848835ms to configureAuth
	W0522 18:50:24.618974  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.618991  176717 retry.go:31] will retry after 2.101461ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.621119  176717 provision.go:84] configureAuth start
	I0522 18:50:24.621180  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.637406  176717 provision.go:87] duration metric: took 16.26519ms to configureAuth
	W0522 18:50:24.637423  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.637438  176717 retry.go:31] will retry after 4.476559ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.642622  176717 provision.go:84] configureAuth start
	I0522 18:50:24.642692  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.657868  176717 provision.go:87] duration metric: took 15.226831ms to configureAuth
	W0522 18:50:24.657884  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.657902  176717 retry.go:31] will retry after 3.594016ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.662085  176717 provision.go:84] configureAuth start
	I0522 18:50:24.662141  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.677340  176717 provision.go:87] duration metric: took 15.23724ms to configureAuth
	W0522 18:50:24.677358  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.677373  176717 retry.go:31] will retry after 9.45115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.687566  176717 provision.go:84] configureAuth start
	I0522 18:50:24.687650  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.703060  176717 provision.go:87] duration metric: took 15.468852ms to configureAuth
	W0522 18:50:24.703082  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.703102  176717 retry.go:31] will retry after 7.432348ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.711327  176717 provision.go:84] configureAuth start
	I0522 18:50:24.711401  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.726618  176717 provision.go:87] duration metric: took 15.264804ms to configureAuth
	W0522 18:50:24.726634  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.726651  176717 retry.go:31] will retry after 12.636762ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.739858  176717 provision.go:84] configureAuth start
	I0522 18:50:24.739949  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.756006  176717 provision.go:87] duration metric: took 16.129499ms to configureAuth
	W0522 18:50:24.756024  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.756040  176717 retry.go:31] will retry after 33.375154ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.790248  176717 provision.go:84] configureAuth start
	I0522 18:50:24.790345  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.805845  176717 provision.go:87] duration metric: took 15.568952ms to configureAuth
	W0522 18:50:24.805864  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.805883  176717 retry.go:31] will retry after 64.186467ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.871181  176717 provision.go:84] configureAuth start
	I0522 18:50:24.871304  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.887220  176717 provision.go:87] duration metric: took 16.001771ms to configureAuth
	W0522 18:50:24.887242  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.887262  176717 retry.go:31] will retry after 41.382508ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.929476  176717 provision.go:84] configureAuth start
	I0522 18:50:24.929584  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:24.945483  176717 provision.go:87] duration metric: took 15.981704ms to configureAuth
	W0522 18:50:24.945501  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:24.945516  176717 retry.go:31] will retry after 55.130964ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.001785  176717 provision.go:84] configureAuth start
	I0522 18:50:25.001865  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:25.017784  176717 provision.go:87] duration metric: took 15.971902ms to configureAuth
	W0522 18:50:25.017802  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.017817  176717 retry.go:31] will retry after 195.484176ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.214174  176717 provision.go:84] configureAuth start
	I0522 18:50:25.214278  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:25.230389  176717 provision.go:87] duration metric: took 16.164833ms to configureAuth
	W0522 18:50:25.230407  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.230424  176717 retry.go:31] will retry after 320.423656ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.551900  176717 provision.go:84] configureAuth start
	I0522 18:50:25.551987  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:25.568037  176717 provision.go:87] duration metric: took 16.111521ms to configureAuth
	W0522 18:50:25.568056  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.568071  176717 retry.go:31] will retry after 222.823294ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.791484  176717 provision.go:84] configureAuth start
	I0522 18:50:25.791554  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:25.808968  176717 provision.go:87] duration metric: took 17.442413ms to configureAuth
	W0522 18:50:25.809094  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:25.809165  176717 retry.go:31] will retry after 297.130566ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:26.106573  176717 provision.go:84] configureAuth start
	I0522 18:50:26.106646  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:26.122513  176717 provision.go:87] duration metric: took 15.914011ms to configureAuth
	W0522 18:50:26.122532  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:26.122546  176717 retry.go:31] will retry after 853.782655ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:26.976497  176717 provision.go:84] configureAuth start
	I0522 18:50:26.976587  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:26.992710  176717 provision.go:87] duration metric: took 16.172695ms to configureAuth
	W0522 18:50:26.992729  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:26.992757  176717 retry.go:31] will retry after 1.237212682s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:28.231087  176717 provision.go:84] configureAuth start
	I0522 18:50:28.231180  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:28.247317  176717 provision.go:87] duration metric: took 16.189445ms to configureAuth
	W0522 18:50:28.247337  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:28.247352  176717 retry.go:31] will retry after 1.803380056s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:30.051330  176717 provision.go:84] configureAuth start
	I0522 18:50:30.051416  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:30.067214  176717 provision.go:87] duration metric: took 15.858328ms to configureAuth
	W0522 18:50:30.067234  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:30.067255  176717 retry.go:31] will retry after 2.707272829s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:32.776080  176717 provision.go:84] configureAuth start
	I0522 18:50:32.776157  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:32.792228  176717 provision.go:87] duration metric: took 16.121913ms to configureAuth
	W0522 18:50:32.792247  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:32.792263  176717 retry.go:31] will retry after 3.232048123s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:36.024780  176717 provision.go:84] configureAuth start
	I0522 18:50:36.024874  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:36.040636  176717 provision.go:87] duration metric: took 15.830448ms to configureAuth
	W0522 18:50:36.040658  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:36.040678  176717 retry.go:31] will retry after 3.84770251s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:39.891328  176717 provision.go:84] configureAuth start
	I0522 18:50:39.891443  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:39.906934  176717 provision.go:87] duration metric: took 15.57955ms to configureAuth
	W0522 18:50:39.906952  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:39.906968  176717 retry.go:31] will retry after 6.128180225s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:46.036131  176717 provision.go:84] configureAuth start
	I0522 18:50:46.036240  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:46.052350  176717 provision.go:87] duration metric: took 16.190803ms to configureAuth
	W0522 18:50:46.052370  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:46.052386  176717 retry.go:31] will retry after 7.05005049s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:53.104227  176717 provision.go:84] configureAuth start
	I0522 18:50:53.104322  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:50:53.120805  176717 provision.go:87] duration metric: took 16.548876ms to configureAuth
	W0522 18:50:53.120823  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:50:53.120839  176717 retry.go:31] will retry after 13.724383154s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:51:06.846009  176717 provision.go:84] configureAuth start
	I0522 18:51:06.846105  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:51:06.862832  176717 provision.go:87] duration metric: took 16.797646ms to configureAuth
	W0522 18:51:06.862849  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:51:06.862865  176717 retry.go:31] will retry after 17.817154004s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:51:24.683333  176717 provision.go:84] configureAuth start
	I0522 18:51:24.683432  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:51:24.700116  176717 provision.go:87] duration metric: took 16.754815ms to configureAuth
	W0522 18:51:24.700136  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:51:24.700153  176717 retry.go:31] will retry after 54.076158285s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:18.776462  176717 provision.go:84] configureAuth start
	I0522 18:52:18.777019  176717 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:18.793730  176717 provision.go:87] duration metric: took 16.987964ms to configureAuth
	W0522 18:52:18.793749  176717 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:18.793765  176717 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:18.793771  176717 machine.go:97] duration metric: took 1m57.724291995s to provisionDockerMachine
	I0522 18:52:18.793778  176717 client.go:171] duration metric: took 2m2.980557518s to LocalClient.Create
	I0522 18:52:20.794106  176717 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:52:20.794159  176717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:52:20.810786  176717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32917 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa Username:docker}
	I0522 18:52:20.896153  176717 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:52:20.900145  176717 start.go:128] duration metric: took 2m5.089126762s to createHost
	I0522 18:52:20.900170  176717 start.go:83] releasing machines lock for "multinode-737786-m03", held for 2m5.089266498s
	W0522 18:52:20.900252  176717 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	* Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:20.902321  176717 out.go:177] 
	W0522 18:52:20.903486  176717 out.go:239] X Exiting due to GUEST_NODE_ADD: failed to add node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	X Exiting due to GUEST_NODE_ADD: failed to add node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:52:20.903498  176717 out.go:239] * 
	* 
	W0522 18:52:20.905540  176717 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:52:20.906641  176717 out.go:177] 

                                                
                                                
** /stderr **
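The retry intervals in the stderr log above (76µs, 175µs, 287µs, ... 13.7s, 17.8s, 54s) suggest a jittered, truncated exponential backoff around configureAuth. A minimal Go sketch of that pattern, assuming a retry helper of roughly this shape (not minikube's exact retry.go):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries op with randomized exponential backoff until it
// succeeds or the total time budget is spent.
func retryExpo(op func() error, initial, total time.Duration) error {
	deadline := time.Now().Add(total)
	wait := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out, last error: %w", err)
		}
		// Jitter: sleep between 0.5x and 1.5x of the current interval,
		// then double the interval for the next round.
		jittered := time.Duration(float64(wait) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		wait *= 2
	}
}

func main() {
	attempts := 0
	err := retryExpo(func() error {
		attempts++
		if attempts < 4 {
			return fmt.Errorf("temporary error (attempt %d)", attempts)
		}
		return nil
	}, time.Millisecond, time.Second)
	fmt.Println("result:", err)
}

Randomizing each interval avoids synchronized retry storms, while the doubling keeps the attempt count bounded within the provisioning timeout.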
multinode_test.go:123: failed to add node to current cluster. args "out/minikube-linux-amd64 node add -p multinode-737786 -v 3 --alsologtostderr" : exit status 80
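The error string that drives this failure is easy to reproduce in isolation. A hypothetical reduction (illustrative only, not minikube's actual source) of how splitting the inspect template's output on "," yields "got 1 values: []" when the template prints nothing:

package main

import (
	"fmt"
	"strings"
)

// parseAddresses splits the template output "<IPv4>,<IPv6>". An empty
// string splits into a single empty field, which %v renders as "[]" --
// matching "got 1 values: []" in the log above.
func parseAddresses(out string) (ipv4, ipv6 string, err error) {
	vals := strings.Split(strings.TrimSpace(out), ",")
	if len(vals) != 2 {
		return "", "", fmt.Errorf(
			"container addresses should have 2 values, got %d values: %v",
			len(vals), vals)
	}
	return vals[0], vals[1], nil
}

func main() {
	fmt.Println(parseAddresses("192.168.67.4,")) // healthy: IPv4 set, IPv6 empty
	fmt.Println(parseAddresses(""))              // the failure mode seen above
}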
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/AddNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:32:24.061487531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f033da40320ba3759bccac938ed954a52e8591012b592a9d459eac191ead142",
	            "SandboxKey": "/var/run/docker/netns/0f033da40320",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "0dc537a1f234204c25e41871b0c1dd246d8d646b8557cafc1f206a6312a58796",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
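Two details in the inspect dump above are worth noting. First, HostConfig.Memory is 2306867200 bytes, exactly 2200 * 1024 * 1024, i.e. the --memory=2200 (MiB) passed at start. Second, NetworkSettings.Networks is keyed by the network name ("multinode-737786"), while the failing configureAuth command in the stderr log indexes it by the machine name ("multinode-737786-m03"). A minimal text/template sketch of that key mismatch (an illustrative guess at the failure mode, not a confirmed root cause):

package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// EndpointSettings mirrors the per-network shape in docker inspect output.
type EndpointSettings struct {
	IPAddress         string
	GlobalIPv6Address string
}

// render applies the same template body as the logged command, indexing
// the Networks map by the given key.
func render(key string, networks map[string]*EndpointSettings) string {
	t := template.Must(template.New("ip").Parse(
		`{{with (index . "` + key + `")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))
	var buf bytes.Buffer
	_ = t.Execute(&buf, networks)
	return buf.String()
}

func main() {
	// Networks as in the inspect dump: keyed by the *network* name.
	networks := map[string]*EndpointSettings{
		"multinode-737786": {IPAddress: "192.168.67.2"},
	}
	// Indexing by network name prints "192.168.67.2,".
	fmt.Printf("%q\n", render("multinode-737786", networks))
	// Indexing by machine name finds no key: index yields a nil pointer,
	// {{with}} skips its body, and the output is empty -> "got 1 values: []".
	fmt.Printf("%q\n", render("multinode-737786-m03", networks))
}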
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/AddNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/AddNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/AddNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| start   | -p multinode-737786                               | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:32 UTC |                     |
	|         | --wait=true --memory=2200                         |                  |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                  |         |         |                     |                     |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|         | --driver=docker                                   |                  |         |         |                     |                     |
	|         | --container-runtime=docker                        |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- apply -f                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:36 UTC | 22 May 24 18:36 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- rollout                    | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:36 UTC |                     |
	|         | status deployment/busybox                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:46 UTC | 22 May 24 18:46 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:47 UTC | 22 May 24 18:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:47 UTC | 22 May 24 18:47 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n --                        |                  |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n -- nslookup               |                  |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- get pods -o                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC | 22 May 24 18:48 UTC |
	|         | busybox-fc5497c4f-7zbr8 -- sh                     |                  |         |         |                     |                     |
	|         | -c ping -c 1 192.168.67.1                         |                  |         |         |                     |                     |
	| kubectl | -p multinode-737786 -- exec                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | busybox-fc5497c4f-cq58n                           |                  |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                  |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                  |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                  |         |         |                     |                     |
	| node    | add -p multinode-737786 -v 3                      | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:48 UTC |                     |
	|         | --alsologtostderr                                 |                  |         |         |                     |                     |
	|---------|---------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
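
The table above traces the DeployApp flow: minikube applies multinode-pod-dns-test.yaml, waits on rollout status deployment/busybox (which never completes here), then polls pod IPs for roughly twelve minutes before the nslookup/ping checks run. A minimal sketch of reproducing the failing DNS check by hand, assuming the profile's kubectl context exists and the busybox pod name from the log is still live (the app=busybox selector is an assumption about the test manifest):

	kubectl --context multinode-737786 get pods -l app=busybox -o jsonpath='{.items[*].status.podIP}'
	kubectl --context multinode-737786 exec busybox-fc5497c4f-cq58n -- nslookup kubernetes.default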
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:32:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:32:18.820070  160939 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:32:18.820158  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820166  160939 out.go:304] Setting ErrFile to fd 2...
	I0522 18:32:18.820169  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820356  160939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:32:18.820906  160939 out.go:298] Setting JSON to false
	I0522 18:32:18.821847  160939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4483,"bootTime":1716398256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:32:18.821903  160939 start.go:139] virtualization: kvm guest
	I0522 18:32:18.825068  160939 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:32:18.826450  160939 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:32:18.826451  160939 notify.go:220] Checking for updates...
	I0522 18:32:18.827917  160939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:32:18.829159  160939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:18.830471  160939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:32:18.832039  160939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:32:18.833509  160939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:32:18.835235  160939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:32:18.856978  160939 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:32:18.857075  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.904065  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.895172586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.904163  160939 docker.go:295] overlay module found
	I0522 18:32:18.906205  160939 out.go:177] * Using the docker driver based on user configuration
	I0522 18:32:18.907716  160939 start.go:297] selected driver: docker
	I0522 18:32:18.907745  160939 start.go:901] validating driver "docker" against <nil>
	I0522 18:32:18.907759  160939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:32:18.908486  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.953709  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.945190998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.953883  160939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 18:32:18.954091  160939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:32:18.956247  160939 out.go:177] * Using Docker driver with root privileges
	I0522 18:32:18.957858  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:18.957878  160939 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 18:32:18.957886  160939 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 18:32:18.957966  160939 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:18.959670  160939 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:32:18.961220  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:32:18.962715  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:32:18.964248  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:18.964293  160939 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:32:18.964303  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:32:18.964344  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:32:18.964398  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:32:18.964409  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:32:18.964718  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:18.964741  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json: {Name:mk43b46af9c3b0b30bdffa978db6463aacef7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
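The profile config saved here drives every later provisioning step. A quick sketch for inspecting the saved node list on the host, assuming jq is installed:

	jq '.Nodes' /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json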
	I0522 18:32:18.980726  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:32:18.980763  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:32:18.980786  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:32:18.980821  160939 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:32:18.980939  160939 start.go:364] duration metric: took 90.565µs to acquireMachinesLock for "multinode-737786"
	I0522 18:32:18.980970  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:18.981093  160939 start.go:125] createHost starting for "" (driver="docker")
	I0522 18:32:18.983462  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:32:18.983714  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:32:18.983748  160939 client.go:168] LocalClient.Create starting
	I0522 18:32:18.983834  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:32:18.983868  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983888  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.983948  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:32:18.983967  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983980  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.984396  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 18:32:18.999077  160939 cli_runner.go:211] docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 18:32:18.999133  160939 network_create.go:281] running [docker network inspect multinode-737786] to gather additional debugging logs...
	I0522 18:32:18.999152  160939 cli_runner.go:164] Run: docker network inspect multinode-737786
	W0522 18:32:19.013736  160939 cli_runner.go:211] docker network inspect multinode-737786 returned with exit code 1
	I0522 18:32:19.013763  160939 network_create.go:284] error running [docker network inspect multinode-737786]: docker network inspect multinode-737786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-737786 not found
	I0522 18:32:19.013789  160939 network_create.go:286] output of [docker network inspect multinode-737786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-737786 not found
	
	** /stderr **
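The exit status 1 here is expected: minikube probes for a per-profile network first and treats "not found" as the signal to create one. A hand-run equivalent of that probe (a sketch, not minikube's exact code path):

	docker network inspect multinode-737786 >/dev/null 2>&1 || echo "profile network missing; minikube will create it"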
	I0522 18:32:19.013898  160939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:19.029452  160939 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-638c6f0967c1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:dc:4f:16} reservation:<nil>}
	I0522 18:32:19.029912  160939 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcc438b661e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:35:35:2f} reservation:<nil>}
	I0522 18:32:19.030359  160939 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a34820}
	I0522 18:32:19.030382  160939 network_create.go:124] attempt to create docker network multinode-737786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0522 18:32:19.030423  160939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-737786 multinode-737786
	I0522 18:32:19.080955  160939 network_create.go:108] docker network multinode-737786 192.168.67.0/24 created
	I0522 18:32:19.080984  160939 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-737786" container
	I0522 18:32:19.081036  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:32:19.095483  160939 cli_runner.go:164] Run: docker volume create multinode-737786 --label name.minikube.sigs.k8s.io=multinode-737786 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:32:19.111371  160939 oci.go:103] Successfully created a docker volume multinode-737786
	I0522 18:32:19.111438  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --entrypoint /usr/bin/test -v multinode-737786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:32:19.598377  160939 oci.go:107] Successfully prepared a docker volume multinode-737786
	I0522 18:32:19.598412  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:19.598430  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:32:19.598501  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:32:23.741449  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.142877958s)
	I0522 18:32:23.741484  160939 kic.go:203] duration metric: took 4.14304939s to extract preloaded images to volume ...
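The preload tarball is unpacked straight into the named volume that later becomes the node's /var, so images are already in place when the node container starts. To spot-check the result one could look under the volume's mountpoint; the _data path below is Docker's default layout and an assumption about this host:

	docker volume inspect multinode-737786 --format '{{ .Mountpoint }}'
	sudo ls /var/lib/docker/volumes/multinode-737786/_data/lib/docker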
	W0522 18:32:23.741633  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:32:23.741756  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:32:23.786059  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786 --name multinode-737786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786 --network multinode-737786 --ip 192.168.67.2 --volume multinode-737786:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:32:24.069142  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Running}}
	I0522 18:32:24.086344  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.103978  160939 cli_runner.go:164] Run: docker exec multinode-737786 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:32:24.141807  160939 oci.go:144] the created container "multinode-737786" has a running status.
	I0522 18:32:24.141842  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa...
	I0522 18:32:24.342469  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:32:24.342509  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:32:24.363722  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.383810  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:32:24.383841  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:32:24.455784  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.474782  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:32:24.474871  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.497547  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.497754  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.497767  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:32:24.698482  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.698509  160939 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:32:24.698565  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.715252  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.715478  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.715502  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:32:24.840636  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.840711  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.857900  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.858096  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.858117  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:32:24.967023  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
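All SSH traffic to the node goes to a host port that Docker published for the container's 22/tcp (32897 in this run), using the key generated above and the docker user. A minimal sketch of dialing it by hand:

	PORT=$(docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' multinode-737786)
	ssh -i /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa \
	  -o StrictHostKeyChecking=no -p "$PORT" docker@127.0.0.1 hostname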
	I0522 18:32:24.967068  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:32:24.967091  160939 ubuntu.go:177] setting up certificates
	I0522 18:32:24.967102  160939 provision.go:84] configureAuth start
	I0522 18:32:24.967154  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:24.983423  160939 provision.go:143] copyHostCerts
	I0522 18:32:24.983455  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983479  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:32:24.983485  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983549  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:32:24.983615  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983633  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:32:24.983640  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983665  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:32:24.983708  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983723  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:32:24.983730  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983749  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:32:24.983796  160939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
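The server certificate is minted with SANs covering loopback, the container's static IP, and the hostnames listed above. Confirming the SAN list is standard openssl, not part of the test run:

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem | grep -A1 'Subject Alternative Name'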
	I0522 18:32:25.113895  160939 provision.go:177] copyRemoteCerts
	I0522 18:32:25.113964  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:32:25.113999  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.130480  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:25.215072  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:32:25.215123  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:32:25.235444  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:32:25.235498  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:32:25.255313  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:32:25.255360  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:32:25.275241  160939 provision.go:87] duration metric: took 308.123688ms to configureAuth
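configureAuth ends once the CA, server cert, and server key have been scp'd into /etc/docker on the node. A one-line spot check, assuming the node container is still up:

	docker exec multinode-737786 ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem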
	I0522 18:32:25.275280  160939 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:32:25.275447  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:25.275493  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.291597  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.291797  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.291813  160939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:32:25.403199  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:32:25.403222  160939 ubuntu.go:71] root file system type: overlay
	I0522 18:32:25.403368  160939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:32:25.403417  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.419508  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.419684  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.419742  160939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:32:25.540991  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:32:25.541068  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.556804  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.556997  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.557016  160939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:32:26.182116  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 18:32:25.538581939 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 18:32:26.182148  160939 machine.go:97] duration metric: took 1.707347407s to provisionDockerMachine
	I0522 18:32:26.182160  160939 client.go:171] duration metric: took 7.198404279s to LocalClient.Create
	I0522 18:32:26.182176  160939 start.go:167] duration metric: took 7.198463255s to libmachine.API.Create "multinode-737786"
	I0522 18:32:26.182182  160939 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:32:26.182195  160939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:32:26.182267  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:32:26.182301  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.198446  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.283412  160939 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:32:26.286206  160939 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:32:26.286222  160939 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:32:26.286230  160939 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:32:26.286238  160939 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:32:26.286245  160939 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:32:26.286252  160939 command_runner.go:130] > ID=ubuntu
	I0522 18:32:26.286258  160939 command_runner.go:130] > ID_LIKE=debian
	I0522 18:32:26.286280  160939 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:32:26.286291  160939 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:32:26.286302  160939 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:32:26.286317  160939 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:32:26.286328  160939 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:32:26.286376  160939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:32:26.286410  160939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:32:26.286428  160939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:32:26.286440  160939 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:32:26.286455  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:32:26.286505  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:32:26.286590  160939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:32:26.286602  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:32:26.286703  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:32:26.294122  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:26.314177  160939 start.go:296] duration metric: took 131.985031ms for postStartSetup
	I0522 18:32:26.314484  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.329734  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:26.329958  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:32:26.329996  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.344674  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.423242  160939 command_runner.go:130] > 27%
	I0522 18:32:26.423479  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:32:26.427170  160939 command_runner.go:130] > 215G
	I0522 18:32:26.427358  160939 start.go:128] duration metric: took 7.446253482s to createHost
	I0522 18:32:26.427380  160939 start.go:83] releasing machines lock for "multinode-737786", held for 7.446425308s
	I0522 18:32:26.427450  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.442825  160939 ssh_runner.go:195] Run: cat /version.json
	I0522 18:32:26.442867  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.442937  160939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:32:26.443009  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.459148  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.459626  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.615027  160939 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:32:26.615123  160939 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:32:26.615168  160939 ssh_runner.go:195] Run: systemctl --version
	I0522 18:32:26.618922  160939 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:32:26.618954  160939 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:32:26.619096  160939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:32:26.622539  160939 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:32:26.622555  160939 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:32:26.622561  160939 command_runner.go:130] > Device: 37h/55d	Inode: 803930      Links: 1
	I0522 18:32:26.622567  160939 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:26.622576  160939 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622584  160939 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622592  160939 command_runner.go:130] > Change: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622604  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622753  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:32:26.643532  160939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:32:26.643591  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:32:26.666889  160939 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0522 18:32:26.666926  160939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
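The two find/sed passes above leave a single active loopback CNI config (given a "name" field and pinned to cniVersion 1.0.0) and shelve the bridge/podman configs by renaming them to *.mk_disabled. A hedged sketch of the end state (JSON field order assumed; the name and cniVersion edits are confirmed by the log):

    # Recreate the patched loopback config by hand (illustrative only).
    sudo tee /etc/cni/net.d/200-loopback.conf >/dev/null <<'EOF'
    {
      "cniVersion": "1.0.0",
      "name": "loopback",
      "type": "loopback"
    }
    EOF
    # The bridge/podman configs are renamed rather than deleted:
    ls /etc/cni/net.d/*.mk_disabled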
	I0522 18:32:26.666940  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.666967  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.667076  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.679769  160939 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:32:26.680589  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:32:26.688804  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:32:26.696790  160939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:32:26.696843  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:32:26.705063  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.713131  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:32:26.721185  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.729165  160939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:32:26.736590  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:32:26.744755  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:32:26.752531  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
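The run of sed commands above edits /etc/containerd/config.toml in place; the keys they converge on look roughly like the fragment below (TOML nesting assumed from containerd's CRI plugin layout, values taken from the commands in the log):

    # Written to a scratch file for inspection only; a real config.toml
    # contains much more than this fragment.
    cat > /tmp/containerd-cri-fragment.toml <<'EOF'
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"
      restrict_oom_score_adj = false
      enable_unprivileged_ports = true
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false
    EOF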
	I0522 18:32:26.760599  160939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:32:26.767562  160939 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:32:26.767615  160939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:32:26.774559  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:26.839033  160939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:32:26.926529  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.926582  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.926653  160939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:32:26.936733  160939 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:32:26.936821  160939 command_runner.go:130] > [Unit]
	I0522 18:32:26.936842  160939 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:32:26.936853  160939 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:32:26.936864  160939 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:32:26.936876  160939 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:32:26.936886  160939 command_runner.go:130] > Wants=network-online.target
	I0522 18:32:26.936894  160939 command_runner.go:130] > Requires=docker.socket
	I0522 18:32:26.936904  160939 command_runner.go:130] > StartLimitBurst=3
	I0522 18:32:26.936910  160939 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:32:26.936921  160939 command_runner.go:130] > [Service]
	I0522 18:32:26.936928  160939 command_runner.go:130] > Type=notify
	I0522 18:32:26.936937  160939 command_runner.go:130] > Restart=on-failure
	I0522 18:32:26.936949  160939 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:32:26.936965  160939 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:32:26.936979  160939 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:32:26.936992  160939 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:32:26.937014  160939 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:32:26.937027  160939 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:32:26.937042  160939 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:32:26.937058  160939 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:32:26.937072  160939 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:32:26.937081  160939 command_runner.go:130] > ExecStart=
	I0522 18:32:26.937105  160939 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:32:26.937116  160939 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:32:26.937132  160939 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:32:26.937143  160939 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:32:26.937151  160939 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:32:26.937158  160939 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:32:26.937167  160939 command_runner.go:130] > LimitCORE=infinity
	I0522 18:32:26.937177  160939 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:32:26.937188  160939 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:32:26.937197  160939 command_runner.go:130] > TasksMax=infinity
	I0522 18:32:26.937203  160939 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:32:26.937216  160939 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:32:26.937224  160939 command_runner.go:130] > Delegate=yes
	I0522 18:32:26.937234  160939 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:32:26.937243  160939 command_runner.go:130] > KillMode=process
	I0522 18:32:26.937253  160939 command_runner.go:130] > [Install]
	I0522 18:32:26.937263  160939 command_runner.go:130] > WantedBy=multi-user.target
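The empty ExecStart= followed by a populated one is the standard systemd override idiom the unit's own comments describe: for non-oneshot services systemd rejects a second ExecStart=, so a drop-in must first clear the inherited command. The same pattern in isolation (paths illustrative):

    sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
    EOF
    sudo systemctl daemon-reload
    sudo systemctl restart docker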
	I0522 18:32:26.937834  160939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:32:26.937891  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:32:26.948358  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.963466  160939 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:32:26.963527  160939 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:32:26.966525  160939 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:32:26.966635  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:32:26.974160  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
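The 189-byte drop-in copied above wires cri-dockerd to CNI; its contents are not echoed in the log, so the flags below are an assumption for illustration, reusing the same ExecStart-override idiom:

    # Assumed shape of 10-cni.conf -- the flags are NOT taken from the log.
    sudo tee /etc/systemd/system/cri-docker.service.d/10-cni.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --network-plugin=cni --hairpin-mode=hairpin-veth
    EOF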
	I0522 18:32:26.991240  160939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:32:27.087184  160939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:32:27.183939  160939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:32:27.184074  160939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:32:27.199707  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.274364  160939 ssh_runner.go:195] Run: sudo systemctl restart docker
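The 130-byte daemon.json written just above is what switches Docker to the cgroupfs driver; that one setting is stated in the log, while the remaining keys below are common minikube defaults and should be read as assumptions:

    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": { "max-size": "100m" },
      "storage-driver": "overlay2"
    }
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker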
	I0522 18:32:27.497339  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:32:27.508050  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.517912  160939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:32:27.594604  160939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:32:27.603789  160939 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0522 18:32:27.670370  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.738915  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:32:27.750303  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.759297  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.830818  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:32:27.886665  160939 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:32:27.886752  160939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:32:27.890680  160939 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:32:27.890703  160939 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:32:27.890711  160939 command_runner.go:130] > Device: 40h/64d	Inode: 258         Links: 1
	I0522 18:32:27.890720  160939 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:32:27.890729  160939 command_runner.go:130] > Access: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890736  160939 command_runner.go:130] > Modify: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890744  160939 command_runner.go:130] > Change: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890751  160939 command_runner.go:130] >  Birth: -
	I0522 18:32:27.890789  160939 start.go:562] Will wait 60s for crictl version
	I0522 18:32:27.890843  160939 ssh_runner.go:195] Run: which crictl
	I0522 18:32:27.893791  160939 command_runner.go:130] > /usr/bin/crictl
	I0522 18:32:27.893846  160939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:32:27.922140  160939 command_runner.go:130] > Version:  0.1.0
	I0522 18:32:27.922160  160939 command_runner.go:130] > RuntimeName:  docker
	I0522 18:32:27.922164  160939 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:32:27.922170  160939 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:32:27.924081  160939 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:32:27.924147  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.943721  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.943794  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.963666  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.967758  160939 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:32:27.967841  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:27.982248  160939 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:32:27.985502  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
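The one-liner above is an idempotent hosts-file update: strip any previous line ending in the name, then append a fresh tab-separated entry. As a reusable sketch (the function name and parameters are illustrative, not from the log):

    update_hosts_entry() {
      local ip="$1" name="$2"
      # Keep every line that does not end in "<tab><name>", then re-append.
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts_entry 192.168.67.1 host.minikube.internal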
	I0522 18:32:27.994876  160939 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:32:27.994996  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:27.995038  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.010537  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.010570  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.010579  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.010586  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.010591  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.010596  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.010603  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.010611  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.011521  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.011540  160939 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:32:28.011593  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.027292  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.027322  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.027331  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.027336  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.027341  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.027345  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.027350  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.027355  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.028262  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.028281  160939 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:32:28.028301  160939 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:32:28.028415  160939 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:32:28.028462  160939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:32:28.069428  160939 command_runner.go:130] > cgroupfs
	I0522 18:32:28.070479  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:28.070498  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:28.070517  160939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:32:28.070539  160939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:32:28.070668  160939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:32:28.070717  160939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:32:28.078629  160939 command_runner.go:130] > kubeadm
	I0522 18:32:28.078645  160939 command_runner.go:130] > kubectl
	I0522 18:32:28.078649  160939 command_runner.go:130] > kubelet
	I0522 18:32:28.078672  160939 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:32:28.078732  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:32:28.086243  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:32:28.101448  160939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:32:28.116571  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
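At this point the rendered kubeadm config is staged at /var/tmp/minikube/kubeadm.yaml.new; it is promoted to kubeadm.yaml just before init, further down in the log. One way to sanity-check such a file by hand, not something this run does:

    # Hypothetical validation pass against the staged config.
    sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run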
	I0522 18:32:28.131251  160939 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:32:28.134083  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:28.142915  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:28.220165  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:28.231892  160939 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:32:28.231919  160939 certs.go:194] generating shared ca certs ...
	I0522 18:32:28.231939  160939 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.232062  160939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:32:28.232110  160939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:32:28.232120  160939 certs.go:256] generating profile certs ...
	I0522 18:32:28.232166  160939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:32:28.232179  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt with IP's: []
	I0522 18:32:28.429639  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt ...
	I0522 18:32:28.429667  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt: {Name:mkf8a2953d60a961d7574d013acfe3a49fa0bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429820  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key ...
	I0522 18:32:28.429830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key: {Name:mk8a5d9e68b7e6e877768e7a2b460a40a5615658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429900  160939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:32:28.429915  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0522 18:32:28.507177  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 ...
	I0522 18:32:28.507207  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43: {Name:mk09ce970fc623afc85e3fab7e404680e391a586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507367  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 ...
	I0522 18:32:28.507382  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43: {Name:mkb137dcb8e57c549f50c85273becdd727997895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507489  160939 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	I0522 18:32:28.507557  160939 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key
	I0522 18:32:28.507612  160939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:32:28.507627  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt with IP's: []
	I0522 18:32:28.617440  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt ...
	I0522 18:32:28.617473  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt: {Name:mk54959ff23e2bad94a115faba59db15d7610b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617661  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key ...
	I0522 18:32:28.617679  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key: {Name:mkd647f7d425cda8f2c79b7f52b5e4d12a0c0d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
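certs.go is minting per-profile certificates signed by the shared minikube CA. A rough openssl equivalent of the client-cert step (subject fields and validity are assumptions standing in for minikube's internal crypto helpers):

    # Key + CSR for the profile's client identity (subject assumed).
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout client.key -out client.csr \
      -subj "/O=system:masters/CN=minikube-user"
    # Sign it with the CA from ~/.minikube (ca.crt/ca.key as in the log).
    openssl x509 -req -in client.csr \
      -CA ca.crt -CAkey ca.key -CAcreateserial \
      -out client.crt -days 365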
	I0522 18:32:28.617777  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:32:28.617797  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:32:28.617808  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:32:28.617823  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:32:28.617836  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:32:28.617848  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:32:28.617860  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:32:28.617873  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:32:28.617924  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:32:28.617957  160939 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:32:28.617967  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:32:28.617990  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:32:28.618019  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:32:28.618040  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:32:28.618075  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:28.618102  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.618116  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.618128  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.618629  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:32:28.639518  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:32:28.659910  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:32:28.679937  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:32:28.699821  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:32:28.719536  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:32:28.739636  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:32:28.759509  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:32:28.779547  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:32:28.799365  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:32:28.819247  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:32:28.839396  160939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:32:28.854046  160939 ssh_runner.go:195] Run: openssl version
	I0522 18:32:28.858540  160939 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:32:28.858690  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:32:28.866551  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869507  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869532  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869569  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.875214  160939 command_runner.go:130] > b5213941
	I0522 18:32:28.875413  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:32:28.883074  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:32:28.890531  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893535  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893557  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893596  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.899083  160939 command_runner.go:130] > 51391683
	I0522 18:32:28.899310  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:32:28.906972  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:32:28.914876  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917837  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917865  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917909  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.923606  160939 command_runner.go:130] > 3ec20f2e
	I0522 18:32:28.923823  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
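The openssl/ln sequence above implements OpenSSL's subject-hash lookup: libraries resolve CAs in /etc/ssl/certs through <subject-hash>.0 symlinks, so each PEM gets hashed and linked. Condensed:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints b5213941 in the log
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"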
	I0522 18:32:28.931516  160939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:32:28.934218  160939 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934259  160939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934296  160939 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:28.934404  160939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:32:28.950504  160939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:32:28.958332  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0522 18:32:28.958356  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0522 18:32:28.958365  160939 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0522 18:32:28.958430  160939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 18:32:28.966017  160939 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 18:32:28.966056  160939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 18:32:28.973169  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0522 18:32:28.973191  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0522 18:32:28.973203  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0522 18:32:28.973217  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973245  160939 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973254  160939 kubeadm.go:156] found existing configuration files:
	
	I0522 18:32:28.973282  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 18:32:28.979661  160939 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980332  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980367  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 18:32:28.987227  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 18:32:28.994428  160939 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994468  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994505  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 18:32:29.001374  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.008562  160939 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008604  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008648  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.015901  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 18:32:29.023088  160939 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023130  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023170  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0522 18:32:29.030242  160939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 18:32:29.069760  160939 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069799  160939 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069836  160939 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 18:32:29.069844  160939 command_runner.go:130] > [preflight] Running pre-flight checks
	I0522 18:32:29.113834  160939 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113865  160939 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113960  160939 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.113987  160939 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.114021  160939 kubeadm.go:309] OS: Linux
	I0522 18:32:29.114029  160939 command_runner.go:130] > OS: Linux
	I0522 18:32:29.114085  160939 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 18:32:29.114092  160939 command_runner.go:130] > CGROUPS_CPU: enabled
	I0522 18:32:29.114134  160939 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114140  160939 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114177  160939 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 18:32:29.114183  160939 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0522 18:32:29.114230  160939 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 18:32:29.114237  160939 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0522 18:32:29.114278  160939 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 18:32:29.114285  160939 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0522 18:32:29.114324  160939 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 18:32:29.114331  160939 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0522 18:32:29.114373  160939 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 18:32:29.114379  160939 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0522 18:32:29.114421  160939 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114428  160939 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114464  160939 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 18:32:29.114483  160939 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0522 18:32:29.173446  160939 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173485  160939 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173623  160939 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173639  160939 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173777  160939 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.173789  160939 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.376675  160939 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379640  160939 out.go:204]   - Generating certificates and keys ...
	I0522 18:32:29.376743  160939 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379742  160939 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0522 18:32:29.379760  160939 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 18:32:29.379853  160939 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.379864  160939 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.571675  160939 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.571705  160939 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.667370  160939 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.667408  160939 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.730638  160939 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:29.730650  160939 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:30.114166  160939 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.114190  160939 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.185007  160939 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185032  160939 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185157  160939 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.185169  160939 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376151  160939 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376188  160939 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376347  160939 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376364  160939 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.621621  160939 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.621651  160939 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.882886  160939 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.882922  160939 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.976851  160939 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 18:32:30.976877  160939 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0522 18:32:30.976927  160939 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:30.976932  160939 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:31.205083  160939 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.205126  160939 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.287749  160939 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.287812  160939 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.548360  160939 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.548390  160939 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.793952  160939 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.793983  160939 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.889475  160939 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.889508  160939 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.890099  160939 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.890122  160939 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.892764  160939 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895234  160939 out.go:204]   - Booting up control plane ...
	I0522 18:32:31.892832  160939 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895375  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895388  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895507  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895522  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895605  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.895619  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.903936  160939 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.903958  160939 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.904721  160939 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904737  160939 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904800  160939 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 18:32:31.904815  160939 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0522 18:32:31.989235  160939 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989268  160939 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989364  160939 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:31.989377  160939 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:32.490313  160939 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490352  160939 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490462  160939 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:32.490478  160939 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:36.991403  160939 kubeadm.go:309] [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:36.991445  160939 command_runner.go:130] > [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:37.002153  160939 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.002184  160939 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.012503  160939 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.012532  160939 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.028436  160939 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028465  160939 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028707  160939 kubeadm.go:309] [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.028725  160939 command_runner.go:130] > [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.035001  160939 kubeadm.go:309] [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.035012  160939 command_runner.go:130] > [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.036324  160939 out.go:204]   - Configuring RBAC rules ...
	I0522 18:32:37.036438  160939 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.036450  160939 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.039237  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.039252  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.044789  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.044808  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.047056  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.047074  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.049159  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.049174  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.051503  160939 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.051520  160939 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.397004  160939 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.397044  160939 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.813980  160939 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 18:32:37.814007  160939 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0522 18:32:38.397032  160939 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.397056  160939 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.398018  160939 kubeadm.go:309] 
	I0522 18:32:38.398101  160939 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398119  160939 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398137  160939 kubeadm.go:309] 
	I0522 18:32:38.398211  160939 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398218  160939 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398222  160939 kubeadm.go:309] 
	I0522 18:32:38.398246  160939 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 18:32:38.398255  160939 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0522 18:32:38.398337  160939 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398355  160939 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398434  160939 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398443  160939 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398453  160939 kubeadm.go:309] 
	I0522 18:32:38.398515  160939 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398522  160939 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398529  160939 kubeadm.go:309] 
	I0522 18:32:38.398609  160939 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398618  160939 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398622  160939 kubeadm.go:309] 
	I0522 18:32:38.398664  160939 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 18:32:38.398677  160939 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0522 18:32:38.398789  160939 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398800  160939 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398863  160939 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398869  160939 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398873  160939 kubeadm.go:309] 
	I0522 18:32:38.398944  160939 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.398950  160939 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.399022  160939 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 18:32:38.399032  160939 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0522 18:32:38.399037  160939 kubeadm.go:309] 
	I0522 18:32:38.399123  160939 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399130  160939 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399216  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399222  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399239  160939 kubeadm.go:309] 	--control-plane 
	I0522 18:32:38.399245  160939 command_runner.go:130] > 	--control-plane 
	I0522 18:32:38.399248  160939 kubeadm.go:309] 
	I0522 18:32:38.399370  160939 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399378  160939 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399382  160939 kubeadm.go:309] 
	I0522 18:32:38.399476  160939 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399489  160939 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399636  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.399649  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.401263  160939 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401277  160939 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401363  160939 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401380  160939 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401398  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:38.401406  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:38.403405  160939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 18:32:38.404599  160939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 18:32:38.408100  160939 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0522 18:32:38.408121  160939 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0522 18:32:38.408128  160939 command_runner.go:130] > Device: 37h/55d	Inode: 808770      Links: 1
	I0522 18:32:38.408133  160939 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:38.408141  160939 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408145  160939 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408150  160939 command_runner.go:130] > Change: 2024-05-22 17:45:13.285811920 +0000
	I0522 18:32:38.408155  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:13.257809894 +0000
	I0522 18:32:38.408204  160939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 18:32:38.408217  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 18:32:38.424237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 18:32:38.586825  160939 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.590952  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.596051  160939 command_runner.go:130] > serviceaccount/kindnet created
	I0522 18:32:38.602929  160939 command_runner.go:130] > daemonset.apps/kindnet created
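Once the four kindnet objects above exist, a quick sanity check that the DaemonSet is actually rolling out would look like the following (a sketch only; it reuses the kubectl binary and kubeconfig paths from this run, and "rollout status" works for DaemonSets as well as Deployments):

    sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        -n kube-system rollout status daemonset/kindnet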
	I0522 18:32:38.606148  160939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 18:32:38.606224  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.606247  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-737786 minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=multinode-737786 minikube.k8s.io/primary=true
	I0522 18:32:38.613527  160939 command_runner.go:130] > -16
	I0522 18:32:38.613563  160939 ops.go:34] apiserver oom_adj: -16
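The probe above reads the kernel OOM adjustment for the kube-apiserver process. On the legacy oom_adj scale (-17 to +15, where -17 disables OOM killing), -16 makes the apiserver one of the last processes the kernel will kill under memory pressure. The check itself is just:

    # prints -16 in this run; assumes pgrep matches a single kube-apiserver process
    cat /proc/$(pgrep kube-apiserver)/oom_adj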
	I0522 18:32:38.671101  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0522 18:32:38.671199  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.679745  160939 command_runner.go:130] > node/multinode-737786 labeled
	I0522 18:32:38.773177  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.171792  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.232239  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.671894  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.732898  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.171368  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.228640  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.671860  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.732183  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.171401  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.231451  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.672085  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.732558  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.172181  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.230594  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.672237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.733746  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.171306  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.233896  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.671416  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.730755  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.171408  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.231441  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.672067  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.729906  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.171343  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.231696  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.671243  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.732606  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.172238  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.229695  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.671885  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.731711  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.171960  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.228503  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.671939  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.733171  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.171805  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.230525  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.672280  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.731666  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.171973  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.230294  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.671915  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.733184  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.171393  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.230515  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.672155  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.732157  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.171406  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.266742  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.671250  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.747943  160939 command_runner.go:130] > NAME      SECRETS   AGE
	I0522 18:32:51.747967  160939 command_runner.go:130] > default   0         0s
	I0522 18:32:51.747991  160939 kubeadm.go:1107] duration metric: took 13.141832952s to wait for elevateKubeSystemPrivileges
	W0522 18:32:51.748021  160939 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 18:32:51.748034  160939 kubeadm.go:393] duration metric: took 22.813740637s to StartCluster
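The run of 'serviceaccounts "default" not found' errors above is expected rather than a failure: the default ServiceAccount is created asynchronously by kube-controller-manager, so minikube polls for it (roughly every 500ms) before applying the kube-system RBAC binding. A minimal sketch of an equivalent wait loop, using this run's paths:

    until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the token controller has created the ServiceAccount
    done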
	I0522 18:32:51.748054  160939 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.748131  160939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.748830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.749052  160939 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:51.750591  160939 out.go:177] * Verifying Kubernetes components...
	I0522 18:32:51.749093  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 18:32:51.749107  160939 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:32:51.749382  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:51.752222  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:51.752296  160939 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:32:51.752312  160939 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:32:51.752326  160939 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	I0522 18:32:51.752339  160939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:32:51.752357  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.752681  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.752857  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.774832  160939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:51.775039  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.776160  160939 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:51.776175  160939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:32:51.776227  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.776423  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.776863  160939 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:32:51.776981  160939 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	I0522 18:32:51.777016  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.777336  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.795509  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.796953  160939 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:51.796975  160939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:32:51.797025  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.814477  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
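The 271-byte storageclass.yaml copied above can be reconstructed from the last-applied-configuration annotation that appears later in this log; modulo whitespace it is:

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: standard
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      labels:
        addonmanager.kubernetes.io/mode: EnsureExists
    provisioner: k8s.io/minikube-hostpath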
	I0522 18:32:51.870824  160939 command_runner.go:130] > apiVersion: v1
	I0522 18:32:51.870847  160939 command_runner.go:130] > data:
	I0522 18:32:51.870853  160939 command_runner.go:130] >   Corefile: |
	I0522 18:32:51.870859  160939 command_runner.go:130] >     .:53 {
	I0522 18:32:51.870863  160939 command_runner.go:130] >         errors
	I0522 18:32:51.870869  160939 command_runner.go:130] >         health {
	I0522 18:32:51.870875  160939 command_runner.go:130] >            lameduck 5s
	I0522 18:32:51.870881  160939 command_runner.go:130] >         }
	I0522 18:32:51.870894  160939 command_runner.go:130] >         ready
	I0522 18:32:51.870908  160939 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0522 18:32:51.870919  160939 command_runner.go:130] >            pods insecure
	I0522 18:32:51.870929  160939 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0522 18:32:51.870939  160939 command_runner.go:130] >            ttl 30
	I0522 18:32:51.870946  160939 command_runner.go:130] >         }
	I0522 18:32:51.870957  160939 command_runner.go:130] >         prometheus :9153
	I0522 18:32:51.870967  160939 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0522 18:32:51.870977  160939 command_runner.go:130] >            max_concurrent 1000
	I0522 18:32:51.870983  160939 command_runner.go:130] >         }
	I0522 18:32:51.870993  160939 command_runner.go:130] >         cache 30
	I0522 18:32:51.871002  160939 command_runner.go:130] >         loop
	I0522 18:32:51.871009  160939 command_runner.go:130] >         reload
	I0522 18:32:51.871022  160939 command_runner.go:130] >         loadbalance
	I0522 18:32:51.871031  160939 command_runner.go:130] >     }
	I0522 18:32:51.871038  160939 command_runner.go:130] > kind: ConfigMap
	I0522 18:32:51.871047  160939 command_runner.go:130] > metadata:
	I0522 18:32:51.871058  160939 command_runner.go:130] >   creationTimestamp: "2024-05-22T18:32:37Z"
	I0522 18:32:51.871067  160939 command_runner.go:130] >   name: coredns
	I0522 18:32:51.871075  160939 command_runner.go:130] >   namespace: kube-system
	I0522 18:32:51.871086  160939 command_runner.go:130] >   resourceVersion: "229"
	I0522 18:32:51.871097  160939 command_runner.go:130] >   uid: d6517ddd-1175-4a40-a10d-60d1d382d7ae
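Assembled from the line-by-line ConfigMap dump above, the stock Corefile that minikube is about to patch reads:

    .:53 {
        errors
        health {
           lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
           ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }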
	I0522 18:32:51.892382  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:51.892495  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
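The sed pipeline above rewrites that Corefile in flight: it inserts a log directive ahead of errors and a hosts block ahead of the forward stanza, then feeds the result to kubectl replace. After the edit, the affected region of the Corefile should look roughly like this (fragment):

        log
        errors
        ...
        hosts {
           192.168.67.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }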
	I0522 18:32:51.950050  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.950378  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.950733  160939 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.950852  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:51.950863  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.950877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.950889  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.959546  160939 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0522 18:32:51.959576  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.959584  160939 round_trippers.go:580]     Audit-Id: 5ddc21bd-b1b2-4ea2-81cf-c014c9a04f15
	I0522 18:32:51.959590  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.959595  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.959598  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.959602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.959606  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.959736  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:51.960668  160939 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:32:51.960761  160939 node_ready.go:38] duration metric: took 9.99326ms for node "multinode-737786" to be "Ready" ...
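The readiness check above is a raw GET against the node object followed by inspection of its conditions. A hypothetical one-off equivalent with plain kubectl (not what minikube actually runs) would be:

    kubectl get node multinode-737786 \
        -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'   # prints True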
	I0522 18:32:51.960805  160939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:32:51.960931  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:32:51.960963  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.960982  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.960996  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.964902  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:51.964929  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.964939  160939 round_trippers.go:580]     Audit-Id: 8b3d34ee-cdb3-49cd-991b-94f61024f9e2
	I0522 18:32:51.964945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.964952  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.964972  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.964977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.964987  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.965722  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"354"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59005 chars]
	I0522 18:32:51.970917  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	I0522 18:32:51.971068  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:51.971109  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.971130  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.971146  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.043914  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:52.045304  160939 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0522 18:32:52.045329  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.045339  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.045343  160939 round_trippers.go:580]     Audit-Id: bed69948-0150-43f6-8c9c-dfd39f8a81e4
	I0522 18:32:52.045349  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.045354  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.045361  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.045365  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.046685  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.047307  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.047329  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.047339  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.047344  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.049383  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:52.051476  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.051500  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.051510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.051516  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.051520  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.051524  160939 round_trippers.go:580]     Audit-Id: 2d50dfec-8764-4cd8-92b8-99f40ba4532d
	I0522 18:32:52.051530  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.051543  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.051659  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.471981  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.472002  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.472013  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.472019  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.547388  160939 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0522 18:32:52.547416  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.547425  160939 round_trippers.go:580]     Audit-Id: 3eb91eea-1138-4663-bd0b-d4f080c3a1ee
	I0522 18:32:52.547430  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.547435  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.547439  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.547457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.547463  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.547916  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.548699  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.548751  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.548782  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.548796  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.554135  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.554200  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.554224  160939 round_trippers.go:580]     Audit-Id: c62627b8-a513-4303-8697-a7fe1f12763e
	I0522 18:32:52.554239  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.554272  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.554291  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.554304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.554318  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.554527  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.556697  160939 command_runner.go:130] > configmap/coredns replaced
	I0522 18:32:52.556753  160939 start.go:946] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0522 18:32:52.557175  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:52.557491  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:52.557873  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.557907  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.557920  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.557932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558046  160939 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0522 18:32:52.558165  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:32:52.558237  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.558260  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558272  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.560256  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:52.560319  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.560338  160939 round_trippers.go:580]     Audit-Id: 12b0e11e-6a44-4304-a157-2b7055e2205e
	I0522 18:32:52.560351  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.560363  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.560396  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.560416  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.560431  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.560444  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.560488  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561030  160939 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561137  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.561162  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.561192  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.561209  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.561222  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.561529  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:52.561547  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.561556  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.561562  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.561567  160939 round_trippers.go:580]     Content-Length: 1273
	I0522 18:32:52.561573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.561577  160939 round_trippers.go:580]     Audit-Id: e2fb2ed9-f480-430a-b9b8-1cb5e5498c36
	I0522 18:32:52.561587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.561592  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.561795  160939 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0522 18:32:52.562115  160939 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.562161  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:32:52.562173  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.562180  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.562188  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.562193  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.566308  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.566400  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566429  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566439  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566449  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566463  160939 round_trippers.go:580]     Content-Length: 1220
	I0522 18:32:52.566468  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566473  160939 round_trippers.go:580]     Audit-Id: 6b60d46d-17ef-45bb-880c-06c439fe9bab
	I0522 18:32:52.566505  160939 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.566355  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.566361  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566411  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566491  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566498  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566501  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.566505  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566509  160939 round_trippers.go:580]     Audit-Id: 2b01bd0d-fb2f-4a1e-8831-7dc2e68860f5
	I0522 18:32:52.566521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566538  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"360","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
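
The storage-class PUT above is the default-storageclass addon re-asserting the storageclass.kubernetes.io/is-default-class annotation on the "standard" StorageClass. A minimal client-go sketch of the same read-modify-write follows; it is illustrative only, not minikube's actual code, and reading the kubeconfig path from $KUBECONFIG is an assumption of the sketch.

-- sketch: set the default StorageClass annotation (Go, client-go; illustrative) --
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Connect using whatever kubeconfig the environment provides (assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Read-modify-write, as the GET/PUT pair in the log does: fetch "standard",
	// set the default-class annotation, and update the object.
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("standard is now the default StorageClass")
}
-- end sketch --
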
	I0522 18:32:52.972030  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.972055  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.972069  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.972073  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.973864  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.973887  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.973900  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.973905  160939 round_trippers.go:580]     Audit-Id: 487db757-1a6c-442b-b5d4-799652d478f6
	I0522 18:32:52.973912  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.973918  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.973922  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.973927  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.974296  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:52.974890  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.974910  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.974922  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.974927  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.976545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.976564  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.976574  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.976579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.976584  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.976589  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.976594  160939 round_trippers.go:580]     Audit-Id: 785dc732-84fe-4320-964c-c2a36a76c8f6
	I0522 18:32:52.976600  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.976934  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.058578  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:53.058609  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.058620  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.058627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.061245  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.061289  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.061299  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:53.061340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.061372  160939 round_trippers.go:580]     Audit-Id: 77d818dd-5f3a-495e-b1ef-ad1a288275fa
	I0522 18:32:53.061388  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.061402  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.061415  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.061432  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.061472  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"370","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:53.061571  160939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-737786" context rescaled to 1 replicas
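
The rescale that kapi.go reports above goes through the deployment's autoscaling/v1 Scale subresource: read the current Scale, set spec.replicas, write it back (the status.replicas field lags until the extra pod terminates, which is why the responses above briefly show spec 1 / status 2). A hedged client-go sketch of that pattern; the function name is invented for illustration and this is not minikube's code.

-- sketch: rescale a deployment via the Scale subresource (Go, client-go; illustrative) --
// rescaleCoreDNS reads the deployment's Scale, sets spec.replicas, and
// writes it back, mirroring the GET/PUT against .../deployments/coredns/scale.
func rescaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = replicas // the log rescales from 2 to 1
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}
-- end sketch --
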
	I0522 18:32:53.076516  160939 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0522 18:32:53.076577  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0522 18:32:53.076599  160939 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076613  160939 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076633  160939 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0522 18:32:53.076657  160939 command_runner.go:130] > pod/storage-provisioner created
	I0522 18:32:53.076679  160939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02727208s)
	I0522 18:32:53.079116  160939 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:32:53.080504  160939 addons.go:505] duration metric: took 1.3313922s for enable addons: enabled=[default-storageclass storage-provisioner]
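
For the storage-provisioner addon, minikube does not call the API server directly; the completed ssh_runner command above shows it shelling out to the node-local kubectl with the node-local kubeconfig. A rough Go sketch of that invocation, with paths copied from the log line and the helper name invented for illustration; outside a minikube node this command will not succeed.

-- sketch: apply an addon manifest with the node-local kubectl (Go; illustrative) --
// applyAddon runs the same shell command ssh_runner completed above.
func applyAddon(ctx context.Context, manifest string) ([]byte, error) {
	cmd := exec.CommandContext(ctx, "sh", "-c",
		"sudo KUBECONFIG=/var/lib/minikube/kubeconfig "+
			"/var/lib/minikube/binaries/v1.30.1/kubectl apply -f "+manifest)
	return cmd.CombinedOutput()
}
-- end sketch --
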
	I0522 18:32:53.471419  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.471453  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.471462  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.471488  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.473769  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.473791  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.473800  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.473806  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.473811  160939 round_trippers.go:580]     Audit-Id: 19f0699f-65e4-4321-a5c4-f6dcf712595d
	I0522 18:32:53.473821  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.473827  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.473830  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.474009  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.474506  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.474523  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.474532  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.474538  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.476545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.476568  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.476579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.476584  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.476591  160939 round_trippers.go:580]     Audit-Id: 723b363a-893a-4a61-92a4-6c8128f0cdae
	I0522 18:32:53.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.476602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.476735  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.971555  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.971574  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.971587  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.971591  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.973627  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.973649  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.973659  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.973664  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.973670  160939 round_trippers.go:580]     Audit-Id: e1a5610a-326e-418b-be80-a1b218bad573
	I0522 18:32:53.973679  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.973686  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.973691  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.973900  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.974364  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.974377  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.974386  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.974395  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.976104  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.976125  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.976134  160939 round_trippers.go:580]     Audit-Id: 1d117d40-7bef-4873-8469-b7cbb9e6e3e0
	I0522 18:32:53.976139  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.976143  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.976148  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.976158  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.976278  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.976641  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
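
The pod_ready messages here and below come from a poll loop: roughly every 500ms the test re-fetches the coredns pod (and its node) and checks the PodReady condition, logging "Ready":"False" until it flips or the wait times out. A minimal sketch of that loop using client-go's wait helpers; it is illustrative only, and minikube's real loop additionally inspects the node object on each pass.

-- sketch: poll a pod's Ready condition (Go, client-go; illustrative) --
// waitPodReady polls the pod every 500ms until its PodReady condition is
// True, the error is fatal, or the timeout expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}
-- end sketch --
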
	I0522 18:32:54.471526  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.471550  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.471561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.471566  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.473892  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.473909  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.473916  160939 round_trippers.go:580]     Audit-Id: 38fa8439-426c-4d8e-8939-768fdd726b5d
	I0522 18:32:54.473920  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.473923  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.473929  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.473935  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.473939  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.474175  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.474657  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.474672  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.474679  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.474682  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.476422  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.476440  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.476449  160939 round_trippers.go:580]     Audit-Id: a464492a-887c-4ec3-9a36-841c6416e733
	I0522 18:32:54.476454  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.476458  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.476461  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.476465  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.476470  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.476646  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:54.971300  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.971328  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.971338  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.971345  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.973536  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.973554  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.973560  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.973564  160939 round_trippers.go:580]     Audit-Id: 233e0e2b-7f8e-4aa8-8c2e-b30dfaf9e4ee
	I0522 18:32:54.973569  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.973575  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.973580  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.973588  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.973824  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.974258  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.974270  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.974277  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.974281  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.976126  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.976141  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.976157  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.976161  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.976166  160939 round_trippers.go:580]     Audit-Id: 72f4a310-bf67-444b-9e24-1577b45c6c56
	I0522 18:32:54.976171  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.976176  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.976347  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.471862  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.471892  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.471903  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.471908  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.474083  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:55.474099  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.474105  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.474108  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.474111  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.474114  160939 round_trippers.go:580]     Audit-Id: 8719e64b-1bf6-4245-a412-eed38a58d1ce
	I0522 18:32:55.474117  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.474121  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.474290  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.474797  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.474823  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.474832  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.474840  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.476324  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.476342  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.476349  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.476355  160939 round_trippers.go:580]     Audit-Id: db213f13-4ec8-4ca3-8987-3f1626a1ad2d
	I0522 18:32:55.476361  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.476365  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.476368  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.476372  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.476512  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.972155  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.972178  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.972186  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.972189  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.973945  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.973967  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.973975  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.973981  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.973987  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.973990  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.973994  160939 round_trippers.go:580]     Audit-Id: a2f51de9-bbaf-49c3-b52e-cd37fc92f529
	I0522 18:32:55.973999  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.974153  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.974595  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.974611  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.974621  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.974627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.976270  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.976293  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.976301  160939 round_trippers.go:580]     Audit-Id: 93227216-8ffe-41b3-8a0d-0b4e86a54912
	I0522 18:32:55.976306  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.976310  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.976315  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.976319  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.976325  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.976427  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.976688  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:56.472139  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.472158  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.472167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.472170  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.474238  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.474260  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.474268  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.474274  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.474279  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.474283  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.474287  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.474292  160939 round_trippers.go:580]     Audit-Id: f67f7ae7-b10d-49f2-94a9-005c4a460c94
	I0522 18:32:56.474484  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.474925  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.474940  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.474946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.474951  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.476537  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.476552  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.476558  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.476563  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.476567  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.476570  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.476573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.476576  160939 round_trippers.go:580]     Audit-Id: 518e1062-0e5b-47ad-b60f-0ff66e25a622
	I0522 18:32:56.476712  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:56.971350  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.971373  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.971381  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.971384  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.973476  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.973497  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.973506  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.973511  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.973517  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.973523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.973527  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.973531  160939 round_trippers.go:580]     Audit-Id: eedbefe3-18e8-407d-9ede-0033266cdf11
	I0522 18:32:56.973633  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.974094  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.974111  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.974118  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.974123  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.975718  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.975738  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.975747  160939 round_trippers.go:580]     Audit-Id: 74afa443-a147-43c7-8759-9886afead09a
	I0522 18:32:56.975753  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.975758  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.975764  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.975768  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.975771  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.975928  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.471499  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.471522  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.471528  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.471532  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.473644  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.473662  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.473668  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.473671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.473674  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.473677  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.473680  160939 round_trippers.go:580]     Audit-Id: 2eec1341-a4a0-4edc-9eab-dd0cee12d4eb
	I0522 18:32:57.473682  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.473870  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.474329  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.474343  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.474350  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.474353  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.475871  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.475886  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.475896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.475901  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.475906  160939 round_trippers.go:580]     Audit-Id: 7e8e4b95-aa91-463a-8f1e-a7944e5daa49
	I0522 18:32:57.475911  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.475916  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.475920  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.476058  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.971752  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.971774  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.971786  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.971790  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.974020  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.974037  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.974043  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.974047  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.974051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.974054  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.974057  160939 round_trippers.go:580]     Audit-Id: 9042de65-ddca-4653-8deb-6e07b20ad9d2
	I0522 18:32:57.974061  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.974263  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.974686  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.974698  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.974705  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.974709  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.976426  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.976445  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.976453  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.976459  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.976464  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.976467  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.976472  160939 round_trippers.go:580]     Audit-Id: 9526988d-2210-4a9c-a210-f69ada2f111e
	I0522 18:32:57.976478  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.976615  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.976919  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
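	These paired requests are minikube's readiness wait (pod_ready.go) polling the API server roughly every 500ms. A minimal sketch of such a loop with client-go follows; the package name, helper name, interval, and timeout are illustrative assumptions, not minikube's actual code:

	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady re-fetches the pod until its Ready condition is True
	// or the timeout elapses, mirroring the ~500ms GET cadence in the
	// log above. Interval and error text are assumptions.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}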
	... the same paired poll repeats every ~500ms for four more iterations (18:32:58.471 through 18:32:59.977): GET /api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr, then GET /api/v1/nodes/multinode-737786, each returning 200 OK in 1-2 milliseconds with unchanged response bodies (pod resourceVersion 396, deletionTimestamp set; node resourceVersion 411) ...
	I0522 18:32:59.977937  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
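	Note that every pod body above already carries deletionTimestamp 2024-05-22T18:33:22Z with a 30-second grace period: the API server has marked coredns-7db6d8ff4d-fhhmr for deletion, so its Ready condition is unlikely to ever flip back to True and the poll can only run out the clock. A loop like the sketch above could bail out early on such pods; terminating below is a hypothetical helper, not part of minikube:

	package podwait

	import corev1 "k8s.io/api/core/v1"

	// terminating reports whether the API server has marked the pod for
	// deletion (metadata.deletionTimestamp set), as in the responses above.
	func terminating(pod *corev1.Pod) bool {
		return pod.DeletionTimestamp != nil
	}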
	... five more identical poll iterations (18:33:00.471 through 18:33:02.477) return 200 OK with the same pod (resourceVersion 396) and node (resourceVersion 411) bodies ...
	I0522 18:33:02.477649  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
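	The round_trippers.go lines themselves come from client-go's debug transport, which the high klog verbosity of this run enables. A rough sketch of how such a logging wrapper around http.RoundTripper works; this is a simplification, the real implementation lives in k8s.io/client-go/transport/round_trippers.go:

	package podwait

	import (
		"log"
		"net/http"
		"time"
	)

	// debugTransport logs each request line, its headers, and the
	// response status with latency, in the spirit of the
	// round_trippers.go output above. Use it as, e.g.,
	// &http.Client{Transport: debugTransport{next: http.DefaultTransport}}.
	type debugTransport struct{ next http.RoundTripper }

	func (d debugTransport) RoundTrip(req *http.Request) (*http.Response, error) {
		log.Printf("%s %s", req.Method, req.URL)
		for k, v := range req.Header {
			log.Printf("    %s: %v", k, v)
		}
		start := time.Now()
		resp, err := d.next.RoundTrip(req)
		if err != nil {
			return nil, err
		}
		log.Printf("Response Status: %s in %d milliseconds", resp.Status, time.Since(start).Milliseconds())
		return resp, nil
	}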
	... polling continues unchanged for four more iterations (18:33:02.971 through 18:33:04.477), still 200 OK with identical pod and node bodies ...
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.971729  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.971751  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.971759  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.971763  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.974273  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.974295  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.974304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.974311  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.974318  160939 round_trippers.go:580]     Audit-Id: e77cbda3-9098-456e-962d-06d9e7e98aee
	I0522 18:33:04.974323  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.974336  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.974341  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.974475  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.975121  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.975157  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.975167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.975172  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.977047  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:04.977076  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.977086  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.977094  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.977102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.977110  160939 round_trippers.go:580]     Audit-Id: 15591115-c0cb-473f-90d4-6c56cf6353d7
	I0522 18:33:04.977116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.977124  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.977257  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.977558  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
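	
	The pod_ready.go entries record one iteration of a readiness poll: fetch the pod, fetch its node, conclude that Ready is still False, sleep roughly half a second, repeat. A minimal sketch of such a loop, assuming a configured *kubernetes.Clientset (the helper name waitPodReady is illustrative, not minikube's):
	
		package podwait
		
		import (
			"context"
			"fmt"
			"time"
		
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)
		
		// waitPodReady polls a pod every 500ms until its Ready condition
		// turns True, or the timeout elapses.
		func waitPodReady(ctx context.Context, c *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
			deadline := time.Now().Add(timeout)
			for time.Now().Before(deadline) {
				pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return err
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
		}
	
	In this run each pod is given up to 6m0s, as the "waiting up to 6m0s" entries show.
	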
	I0522 18:33:05.471962  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.471987  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.471997  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.472003  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.474481  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.474506  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.474516  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.474523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.474527  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.474532  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.474536  160939 round_trippers.go:580]     Audit-Id: fdb343ad-37ed-4d5e-8481-409ca7bff1bb
	I0522 18:33:05.474542  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.474675  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.475316  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.475335  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.475345  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.475349  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.477162  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:05.477192  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.477208  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.477219  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.477224  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.477230  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.477237  160939 round_trippers.go:580]     Audit-Id: 5a4a1adb-a9e7-45d6-89b9-6f8cbdc8e14f
	I0522 18:33:05.477241  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.477365  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:05.971575  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.971603  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.971614  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.971620  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.973961  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.973988  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.973998  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.974005  160939 round_trippers.go:580]     Audit-Id: 6cf57dbb-f61f-4a34-ba71-0fa1a7be6c2f
	I0522 18:33:05.974009  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.974015  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.974020  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.974024  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.974227  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.974844  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.974866  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.974877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.974885  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.976914  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.976937  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.976948  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.976955  160939 round_trippers.go:580]     Audit-Id: f5c6902b-e141-4739-b75c-abe5d7d10bcc
	I0522 18:33:05.976962  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.976969  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.976977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.976982  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.977139  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.471359  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:06.471382  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.471390  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.471393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.473976  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.473998  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.474008  160939 round_trippers.go:580]     Audit-Id: 678a5898-c668-42b8-9f9d-cd08c0af9f0a
	I0522 18:33:06.474014  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.474021  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.474026  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.474032  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.474036  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.474212  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"419","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6465 chars]
	I0522 18:33:06.474787  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.474806  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.474816  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.474824  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.476696  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.476720  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.476727  160939 round_trippers.go:580]     Audit-Id: 08522360-196f-4610-a526-8fbc3b876994
	I0522 18:33:06.476732  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.476736  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.476739  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.476742  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.476754  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.476918  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.477418  160939 pod_ready.go:97] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2
}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0522 18:33:06.477449  160939 pod_ready.go:81] duration metric: took 14.506466075s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	E0522 18:33:06.477464  160939 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
7.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
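	
	This is the one genuine state change in the excerpt: the wait on coredns-7db6d8ff4d-fhhmr is abandoned because the pod reached phase Succeeded. The responses above already carry a deletionTimestamp and a 30-second grace period for this pod, so it was being terminated while the poll ran, and a terminated pod has exited and can never report Ready again. A sketch of that terminal-phase guard (the function name is illustrative, not minikube's):
	
		package podwait
		
		import corev1 "k8s.io/api/core/v1"
		
		// isTerminal reports whether a pod can never become Ready again.
		// Succeeded and Failed are terminal phases; once one is observed,
		// continuing to poll for Ready=True is pointless.
		func isTerminal(pod *corev1.Pod) bool {
			return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed
		}
	
	The 14.5s spent on this pod is recorded as a duration metric, and the wait moves on to the surviving replica, coredns-7db6d8ff4d-jhsz9.
	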
	I0522 18:33:06.477476  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:06.477540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.477549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.477558  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.477569  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.479562  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.479577  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.479583  160939 round_trippers.go:580]     Audit-Id: 9a30cf33-1204-4670-a99f-86946c97d423
	I0522 18:33:06.479587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.479591  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.479597  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.479605  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.479611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.479794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.480253  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.480269  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.480275  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.480279  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.481839  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.481857  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.481867  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.481872  160939 round_trippers.go:580]     Audit-Id: fa40a49d-204f-481d-8912-a34512c1ae3b
	I0522 18:33:06.481876  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.481880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.481884  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.481888  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.481980  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.978658  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.978680  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.978691  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.978699  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.980836  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.980853  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.980860  160939 round_trippers.go:580]     Audit-Id: afbb292e-0ad0-4084-869c-e9ab1e1013e2
	I0522 18:33:06.980864  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.980867  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.980869  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.980871  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.980874  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.981047  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.981449  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.981462  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.981468  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.981471  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.982978  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.983001  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.983007  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.983010  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.983014  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.983018  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.983021  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.983024  160939 round_trippers.go:580]     Audit-Id: 5f3372bc-5c9a-49ce-8e2e-d96da0513d85
	I0522 18:33:06.983146  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.478352  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:07.478377  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.478384  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.478388  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.480498  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.480523  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.480531  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.480535  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.480540  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.480543  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.480546  160939 round_trippers.go:580]     Audit-Id: eb5f2654-4971-4578-bff8-10e4102baa23
	I0522 18:33:07.480550  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.480747  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:33:07.481177  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.481191  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.481197  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.481201  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.482856  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.482869  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.482876  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.482880  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.482882  160939 round_trippers.go:580]     Audit-Id: 8e36f69f-54f0-4e9d-a61f-f28960dbb847
	I0522 18:33:07.482885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.482891  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.482896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.483013  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.483304  160939 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.483324  160939 pod_ready.go:81] duration metric: took 1.005836965s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
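	
	The second replica turns Ready after a single extra poll. Note that each iteration of this style of wait costs two GETs (pod plus node) roughly every half second, per pod; a watch would deliver the same Ready transition over one long-lived request. A sketch of that alternative, under the same clientset assumption as above -- a design option, not what minikube does here:
	
		package podwait
		
		import (
			"context"
			"fmt"
		
			corev1 "k8s.io/api/core/v1"
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
		)
		
		// watchUntilReady opens one watch on the named pod and returns as
		// soon as an event shows the Ready condition True.
		func watchUntilReady(ctx context.Context, c *kubernetes.Clientset, ns, name string) error {
			w, err := c.CoreV1().Pods(ns).Watch(ctx, metav1.ListOptions{
				FieldSelector: "metadata.name=" + name,
			})
			if err != nil {
				return err
			}
			defer w.Stop()
			for ev := range w.ResultChan() {
				pod, ok := ev.Object.(*corev1.Pod)
				if !ok {
					continue
				}
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			return fmt.Errorf("watch closed before %s/%s became Ready", ns, name)
		}
	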
	I0522 18:33:07.483334  160939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:33:07.483393  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.483399  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.483403  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.485055  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.485074  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.485080  160939 round_trippers.go:580]     Audit-Id: 36a9d3b1-5c0c-41cd-92e6-65aaf83162ed
	I0522 18:33:07.485084  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.485089  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.485093  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.485098  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.485102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.485211  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:33:07.485525  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.485537  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.485544  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.485547  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.486957  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.486977  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.486984  160939 round_trippers.go:580]     Audit-Id: 4d183f34-de9b-40df-89b0-747f4b8d080a
	I0522 18:33:07.486991  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.486997  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.487008  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.487015  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.487019  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.487106  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.487417  160939 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.487433  160939 pod_ready.go:81] duration metric: took 4.091969ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487445  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487498  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:33:07.487505  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.487511  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.487514  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.489030  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.489044  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.489060  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.489064  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.489068  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.489072  160939 round_trippers.go:580]     Audit-Id: 816d35e6-d77c-435e-912a-947f9c9ca4d7
	I0522 18:33:07.489075  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.489078  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.489182  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:33:07.489546  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.489558  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.489564  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.489568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.490910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.490924  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.490930  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.490934  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.490937  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.490942  160939 round_trippers.go:580]     Audit-Id: 15a2ac49-01ac-4660-8380-560b4572c707
	I0522 18:33:07.490945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.490949  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.491063  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.491412  160939 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.491430  160939 pod_ready.go:81] duration metric: took 3.978447ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491441  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491501  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:33:07.491510  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.491520  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.491525  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.492901  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.492917  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.492936  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.492944  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.492949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.492953  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.492958  160939 round_trippers.go:580]     Audit-Id: 599fa209-a829-4a91-9f16-72ec6e1a6954
	I0522 18:33:07.492961  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.493092  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:33:07.493557  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.493574  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.493584  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.493594  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.495001  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.495023  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.495032  160939 round_trippers.go:580]     Audit-Id: 451564e8-a844-4514-b8e9-ba808ecbe9d8
	I0522 18:33:07.495042  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.495047  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.495051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.495057  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.495061  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.495200  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.495470  160939 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.495494  160939 pod_ready.go:81] duration metric: took 4.045749ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495507  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495547  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:33:07.495553  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.495561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.495568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.497087  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.497100  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.497105  160939 round_trippers.go:580]     Audit-Id: 1fe00356-708f-49ce-b6e8-360006eb0d30
	I0522 18:33:07.497109  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.497114  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.497119  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.497123  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.497129  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.497236  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:33:07.671971  160939 request.go:629] Waited for 174.334017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672035  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672040  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.672048  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.672051  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.673738  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.673754  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.673762  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.673769  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.673773  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.673777  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.673781  160939 round_trippers.go:580]     Audit-Id: 72f84e56-248f-49c0-b60e-16c5fc7a3e8c
	I0522 18:33:07.673785  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.673915  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.674199  160939 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.674216  160939 pod_ready.go:81] duration metric: took 178.701037ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.674225  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.871582  160939 request.go:629] Waited for 197.277518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871632  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.871646  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.871651  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.873675  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.873695  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.873702  160939 round_trippers.go:580]     Audit-Id: d0aea0c3-6995-4f17-9b3f-5c0b00c0a82e
	I0522 18:33:07.873707  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.873710  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.873714  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.873718  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.873721  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.873885  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:33:08.071516  160939 request.go:629] Waited for 197.279562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071592  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071600  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.071608  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.071612  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.073750  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.074093  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.074136  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.074152  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.074164  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.074178  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.074192  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.074205  160939 round_trippers.go:580]     Audit-Id: 9b07fddc-fd9a-4741-b67f-7bda2d392bdb
	I0522 18:33:08.074358  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:08.074852  160939 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:08.074892  160939 pod_ready.go:81] duration metric: took 400.659133ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:08.074912  160939 pod_ready.go:38] duration metric: took 16.114074117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
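The pod_ready wait above is a polling loop: it GETs each system pod (and its node) until the PodReady condition reports True, or the 6m0s budget runs out. A minimal client-go sketch of that pattern, assuming a kubeconfig path and reusing a pod name from the log purely as an example:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"kube-scheduler-multinode-737786", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}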
	I0522 18:33:08.074944  160939 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:33:08.075020  160939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:33:08.085416  160939 command_runner.go:130] > 2247
	I0522 18:33:08.086205  160939 api_server.go:72] duration metric: took 16.337127031s to wait for apiserver process to appear ...
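The process check above relies on pgrep's flags: -f matches against the full command line, -x requires the whole line to match the pattern, and -n returns only the newest matching process, so the single line "2247" is taken as the apiserver PID. A small sketch of the same probe via os/exec (run locally it would need the same privileges as the logged sudo invocation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// -xnf: exact full-command-line match, newest matching process only.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("no apiserver process found:", err) // pgrep exits non-zero on no match
		return
	}
	fmt.Println("apiserver pid:", strings.TrimSpace(string(out)))
}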
	I0522 18:33:08.086224  160939 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:33:08.086244  160939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:33:08.090306  160939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
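The healthz gate is a plain HTTPS GET that expects status 200 and the literal body "ok". A sketch of that probe; the InsecureSkipVerify shortcut is for illustration only, since a real client would present the cluster CA and client certificates:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustration only: skip certificate verification instead of wiring up
	// the cluster's CA bundle and client certs.
	tr := &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}
	client := &http.Client{Transport: tr}

	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}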
	I0522 18:33:08.090371  160939 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:33:08.090381  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.090392  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.090411  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.091107  160939 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:33:08.091121  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.091127  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.091130  160939 round_trippers.go:580]     Audit-Id: d9f416c6-963b-4b2c-9260-40a10a9a60da
	I0522 18:33:08.091133  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.091136  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.091138  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.091141  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.091144  160939 round_trippers.go:580]     Content-Length: 263
	I0522 18:33:08.091156  160939 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:33:08.091223  160939 api_server.go:141] control plane version: v1.30.1
	I0522 18:33:08.091237  160939 api_server.go:131] duration metric: took 5.007834ms to wait for apiserver health ...
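The /version payload shown above maps onto a small struct (the same shape as k8s.io/apimachinery/pkg/version.Info); the "control plane version: v1.30.1" line is just its gitVersion field. A sketch of decoding it, using an abbreviated copy of the logged body:

package main

import (
	"encoding/json"
	"fmt"
)

// versionInfo mirrors the fields of the /version response above.
type versionInfo struct {
	Major      string `json:"major"`
	Minor      string `json:"minor"`
	GitVersion string `json:"gitVersion"`
	GitCommit  string `json:"gitCommit"`
	BuildDate  string `json:"buildDate"`
	GoVersion  string `json:"goVersion"`
	Platform   string `json:"platform"`
}

func main() {
	payload := []byte(`{"major":"1","minor":"30","gitVersion":"v1.30.1"}`)
	var v versionInfo
	if err := json.Unmarshal(payload, &v); err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion) // v1.30.1, as logged
}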
	I0522 18:33:08.091244  160939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:33:08.271652  160939 request.go:629] Waited for 180.311539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271713  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271719  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.271727  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.271732  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.282797  160939 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0522 18:33:08.282826  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.282835  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.282840  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.282847  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.282853  160939 round_trippers.go:580]     Audit-Id: abfdd3f0-3612-4cc0-9cb4-169b86afc2f2
	I0522 18:33:08.282857  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.282862  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.284550  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.287099  160939 system_pods.go:59] 8 kube-system pods found
	I0522 18:33:08.287133  160939 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.287139  160939 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.287143  160939 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.287148  160939 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.287156  160939 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.287161  160939 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.287170  160939 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.287175  160939 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.287184  160939 system_pods.go:74] duration metric: took 195.931068ms to wait for pod list to return data ...
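The recurring "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter, not from the API server: the default rest.Config allows roughly 5 requests/sec with a burst of 10, and once the burst is spent each request blocks and logs that message. A sketch of where those knobs live, assuming a placeholder kubeconfig path:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	// Defaults are QPS=5, Burst=10; a chatty polling loop like the one above
	// exhausts the burst quickly and starts queueing. Raising the limits
	// widens the client-side budget.
	cfg.QPS = 50
	cfg.Burst = 100
	client := kubernetes.NewForConfigOrDie(cfg)
	_ = client // use as usual; requests now throttle far less often
}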
	I0522 18:33:08.287199  160939 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:33:08.471518  160939 request.go:629] Waited for 184.244722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471609  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471620  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.471632  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.471638  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.473861  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.473879  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.473885  160939 round_trippers.go:580]     Audit-Id: 373a6323-7376-4ad7-973b-c7b9843fbc1e
	I0522 18:33:08.473889  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.473892  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.473895  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.473898  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.473902  160939 round_trippers.go:580]     Content-Length: 261
	I0522 18:33:08.473906  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.473926  160939 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:33:08.474181  160939 default_sa.go:45] found service account: "default"
	I0522 18:33:08.474221  160939 default_sa.go:55] duration metric: took 187.005275ms for default service account to be created ...
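The default_sa gate simply lists ServiceAccounts in the default namespace until one named "default" shows up, which signals that the controller-manager's token machinery is working. A minimal sketch of that check:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	sas, err := client.CoreV1().ServiceAccounts("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, sa := range sas.Items {
		if sa.Name == "default" {
			fmt.Println("found service account:", sa.Name)
		}
	}
}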
	I0522 18:33:08.474236  160939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:33:08.671668  160939 request.go:629] Waited for 197.344631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671731  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671738  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.671747  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.671754  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.674660  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.674693  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.674702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.674707  160939 round_trippers.go:580]     Audit-Id: a86ce0e7-c7ca-4d9a-b3f4-5977392399ab
	I0522 18:33:08.674710  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.674715  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.674721  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.674726  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.675199  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.677649  160939 system_pods.go:86] 8 kube-system pods found
	I0522 18:33:08.677676  160939 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.677682  160939 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.677689  160939 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.677700  160939 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.677712  160939 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.677718  160939 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.677728  160939 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.677736  160939 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.677746  160939 system_pods.go:126] duration metric: took 203.502619ms to wait for k8s-apps to be running ...
	I0522 18:33:08.677758  160939 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:33:08.677814  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:33:08.688253  160939 system_svc.go:56] duration metric: took 10.491535ms WaitForService to wait for kubelet
	I0522 18:33:08.688273  160939 kubeadm.go:576] duration metric: took 16.939194998s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
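The kubelet check above runs over SSH and reads only the exit code: `systemctl is-active --quiet <unit>` prints nothing and exits 0 when the unit is active. A minimal local equivalent of that probe (the extra "service" token in the logged command is passed through as just another unit name; the essential check is the one below):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means active; any non-zero exit means not running.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not running:", err)
		return
	}
	fmt.Println("kubelet is running")
}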
	I0522 18:33:08.688296  160939 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:33:08.871835  160939 request.go:629] Waited for 183.471986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871919  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.871941  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.871948  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.873838  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:08.873861  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.873868  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.873874  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.873881  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.873884  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.873888  160939 round_trippers.go:580]     Audit-Id: 58d6eaf2-6ad2-480d-a68d-b490633e56b2
	I0522 18:33:08.873893  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.874043  160939 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"433","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5061 chars]
	I0522 18:33:08.874388  160939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:33:08.874407  160939 node_conditions.go:123] node cpu capacity is 8
	I0522 18:33:08.874418  160939 node_conditions.go:105] duration metric: took 186.116583ms to run NodePressure ...
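The NodePressure step lists the nodes and reads capacity plus pressure conditions off each one; the two capacity lines above (304681132Ki ephemeral storage, 8 CPUs) come straight from node.Status.Capacity. A client-go sketch of that verification, again with a placeholder kubeconfig path:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		// Flag MemoryPressure/DiskPressure conditions that report True.
		for _, c := range n.Status.Conditions {
			if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) &&
				c.Status == corev1.ConditionTrue {
				fmt.Printf("  pressure condition %s is True\n", c.Type)
			}
		}
	}
}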
	I0522 18:33:08.874431  160939 start.go:240] waiting for startup goroutines ...
	I0522 18:33:08.874437  160939 start.go:245] waiting for cluster config update ...
	I0522 18:33:08.874451  160939 start.go:254] writing updated cluster config ...
	I0522 18:33:08.876274  160939 out.go:177] 
	I0522 18:33:08.877676  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:33:08.877789  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.879303  160939 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:33:08.880612  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:33:08.881728  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:33:08.882756  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:08.882774  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:33:08.882785  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:33:08.882855  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:33:08.882870  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:33:08.882934  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.898326  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:33:08.898343  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:33:08.898358  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:33:08.898387  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:33:08.898479  160939 start.go:364] duration metric: took 72.592µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:33:08.898505  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p
2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:33:08.898623  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:33:08.900307  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:33:08.900408  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:33:08.900435  160939 client.go:168] LocalClient.Create starting
	I0522 18:33:08.900508  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:33:08.900541  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900564  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900623  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:33:08.900647  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900668  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900894  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:33:08.915750  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc001f32540 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
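The `docker network inspect --format` call above feeds a Go template: `range .IPAM.Config` walks the network's IPAM entries and pulls out Subnet and Gateway, which is how minikube recovers the 192.168.67.0/24 pool it then derives the static IP from. A sketch of the same template mechanics against a simplified stand-in struct (not docker's real inspect types):

package main

import (
	"os"
	"text/template"
)

// Simplified stand-ins for the fields docker exposes to --format.
type ipamConfig struct {
	Subnet  string
	Gateway string
}

type network struct {
	Name string
	IPAM struct{ Config []ipamConfig }
}

func main() {
	n := network{Name: "multinode-737786"}
	n.IPAM.Config = []ipamConfig{{Subnet: "192.168.67.0/24", Gateway: "192.168.67.1"}}

	// Same range/field syntax as the --format string in the log line above.
	tmpl := template.Must(template.New("net").Parse(
		`{{.Name}} subnet={{range .IPAM.Config}}{{.Subnet}}{{end}} gateway={{range .IPAM.Config}}{{.Gateway}}{{end}}` + "\n"))
	if err := tmpl.Execute(os.Stdout, n); err != nil {
		panic(err)
	}
}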
	I0522 18:33:08.915790  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
	I0522 18:33:08.915845  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:33:08.930295  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:33:08.945898  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:33:08.945964  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:33:09.453161  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:33:09.453202  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:09.453224  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:33:09.453289  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:33:13.570301  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.116968437s)
	I0522 18:33:13.570337  160939 kic.go:203] duration metric: took 4.117109757s to extract preloaded images to volume ...
	W0522 18:33:13.570466  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:33:13.570568  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:33:13.614931  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:33:13.883217  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:33:13.899745  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:13.916953  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:33:13.956223  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:33:13.956258  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:33:14.377830  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:33:14.377884  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:33:14.398081  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.414616  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:33:14.414636  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:33:14.454848  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.472868  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:33:14.472944  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.489872  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.490088  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.490103  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:33:14.602489  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.602516  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:33:14.602569  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.619132  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.619380  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.619398  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:33:14.740786  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.740854  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.756827  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.756995  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.757012  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:33:14.867113  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:33:14.867142  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:33:14.867157  160939 ubuntu.go:177] setting up certificates
	I0522 18:33:14.867169  160939 provision.go:84] configureAuth start
	I0522 18:33:14.867230  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.882769  160939 provision.go:87] duration metric: took 15.590775ms to configureAuth
	W0522 18:33:14.882788  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.882814  160939 retry.go:31] will retry after 133.214µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
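Reading the two docker commands above together suggests why every configureAuth attempt fails the same way (this is an inference from the logged commands, not a confirmed root cause): the container was attached to the network "multinode-737786" via the --network flag, but the inspect format indexes .NetworkSettings.Networks by the container name "multinode-737786-m02". The missing key makes the {{with}} body render nothing, the command prints an empty string, and splitting it on "," yields one field instead of the expected "IPv4,IPv6" pair, so the retry loop below backs off with ever-longer delays without ever succeeding. A sketch reproducing just the template behavior (not minikube's code):

package main

import (
	"bytes"
	"fmt"
	"strings"
	"text/template"
)

// endpoint stands in for docker's per-network endpoint settings, which the
// real inspect output stores as pointers keyed by network name.
type endpoint struct {
	IPAddress         string
	GlobalIPv6Address string
}

func main() {
	// The container joined "multinode-737786", not "multinode-737786-m02".
	networks := map[string]*endpoint{
		"multinode-737786": {IPAddress: "192.168.67.3"},
	}

	// Same shape as the logged --format string: index by a key that is
	// absent, so {{with}} sees a nil pointer and skips its body entirely.
	tmpl := template.Must(template.New("ip").Parse(
		`{{with (index . "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))

	var out bytes.Buffer
	if err := tmpl.Execute(&out, networks); err != nil {
		panic(err)
	}
	parts := strings.Split(out.String(), ",")
	fmt.Printf("got %d value(s): %q\n", len(parts), parts) // got 1 value(s): [""]
}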
	I0522 18:33:14.883930  160939 provision.go:84] configureAuth start
	I0522 18:33:14.883986  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.899452  160939 provision.go:87] duration metric: took 15.501642ms to configureAuth
	W0522 18:33:14.899474  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.899491  160939 retry.go:31] will retry after 108.916µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.900597  160939 provision.go:84] configureAuth start
	I0522 18:33:14.900654  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.915555  160939 provision.go:87] duration metric: took 14.940574ms to configureAuth
	W0522 18:33:14.915579  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.915597  160939 retry.go:31] will retry after 309.632µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.916706  160939 provision.go:84] configureAuth start
	I0522 18:33:14.916763  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.931974  160939 provision.go:87] duration metric: took 15.250688ms to configureAuth
	W0522 18:33:14.931998  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.932022  160939 retry.go:31] will retry after 318.322µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.933148  160939 provision.go:84] configureAuth start
	I0522 18:33:14.933214  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.948456  160939 provision.go:87] duration metric: took 15.28648ms to configureAuth
	W0522 18:33:14.948480  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.948498  160939 retry.go:31] will retry after 399.734µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.949641  160939 provision.go:84] configureAuth start
	I0522 18:33:14.949703  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.966281  160939 provision.go:87] duration metric: took 16.616876ms to configureAuth
	W0522 18:33:14.966304  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.966321  160939 retry.go:31] will retry after 408.958µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.967426  160939 provision.go:84] configureAuth start
	I0522 18:33:14.967490  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.983570  160939 provision.go:87] duration metric: took 16.124586ms to configureAuth
	W0522 18:33:14.983595  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.983618  160939 retry.go:31] will retry after 1.326072ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.985801  160939 provision.go:84] configureAuth start
	I0522 18:33:14.985868  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.000835  160939 provision.go:87] duration metric: took 15.012309ms to configureAuth
	W0522 18:33:15.000856  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.000876  160939 retry.go:31] will retry after 915.276µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.001989  160939 provision.go:84] configureAuth start
	I0522 18:33:15.002061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.016920  160939 provision.go:87] duration metric: took 14.912197ms to configureAuth
	W0522 18:33:15.016940  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.016956  160939 retry.go:31] will retry after 2.309554ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.020139  160939 provision.go:84] configureAuth start
	I0522 18:33:15.020206  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.035720  160939 provision.go:87] duration metric: took 15.563337ms to configureAuth
	W0522 18:33:15.035737  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.035758  160939 retry.go:31] will retry after 5.684682ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.041949  160939 provision.go:84] configureAuth start
	I0522 18:33:15.042023  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.057131  160939 provision.go:87] duration metric: took 15.161716ms to configureAuth
	W0522 18:33:15.057153  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.057173  160939 retry.go:31] will retry after 7.16749ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.065354  160939 provision.go:84] configureAuth start
	I0522 18:33:15.065419  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.080211  160939 provision.go:87] duration metric: took 14.836861ms to configureAuth
	W0522 18:33:15.080233  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.080253  160939 retry.go:31] will retry after 11.273171ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.092437  160939 provision.go:84] configureAuth start
	I0522 18:33:15.092522  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.107812  160939 provision.go:87] duration metric: took 15.35491ms to configureAuth
	W0522 18:33:15.107829  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.107845  160939 retry.go:31] will retry after 8.109728ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.117029  160939 provision.go:84] configureAuth start
	I0522 18:33:15.117103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.132558  160939 provision.go:87] duration metric: took 15.508983ms to configureAuth
	W0522 18:33:15.132577  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.132597  160939 retry.go:31] will retry after 10.345201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.143792  160939 provision.go:84] configureAuth start
	I0522 18:33:15.143857  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.159011  160939 provision.go:87] duration metric: took 15.196792ms to configureAuth
	W0522 18:33:15.159034  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.159054  160939 retry.go:31] will retry after 30.499115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.190240  160939 provision.go:84] configureAuth start
	I0522 18:33:15.190329  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.207177  160939 provision.go:87] duration metric: took 16.913741ms to configureAuth
	W0522 18:33:15.207195  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.207211  160939 retry.go:31] will retry after 63.879043ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.271445  160939 provision.go:84] configureAuth start
	I0522 18:33:15.271548  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.287528  160939 provision.go:87] duration metric: took 16.057048ms to configureAuth
	W0522 18:33:15.287550  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.287569  160939 retry.go:31] will retry after 67.853567ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.355802  160939 provision.go:84] configureAuth start
	I0522 18:33:15.355901  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.372258  160939 provision.go:87] duration metric: took 16.425467ms to configureAuth
	W0522 18:33:15.372281  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.372300  160939 retry.go:31] will retry after 129.065548ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.501513  160939 provision.go:84] configureAuth start
	I0522 18:33:15.501606  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.517774  160939 provision.go:87] duration metric: took 16.234544ms to configureAuth
	W0522 18:33:15.517792  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.517809  160939 retry.go:31] will retry after 177.855143ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.696167  160939 provision.go:84] configureAuth start
	I0522 18:33:15.696277  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.712184  160939 provision.go:87] duration metric: took 15.973904ms to configureAuth
	W0522 18:33:15.712203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.712222  160939 retry.go:31] will retry after 282.785493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.995691  160939 provision.go:84] configureAuth start
	I0522 18:33:15.995782  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.011555  160939 provision.go:87] duration metric: took 15.836293ms to configureAuth
	W0522 18:33:16.011573  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.011590  160939 retry.go:31] will retry after 182.7986ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.194929  160939 provision.go:84] configureAuth start
	I0522 18:33:16.195022  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.210991  160939 provision.go:87] duration metric: took 16.035288ms to configureAuth
	W0522 18:33:16.211015  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.211031  160939 retry.go:31] will retry after 462.848752ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.674586  160939 provision.go:84] configureAuth start
	I0522 18:33:16.674669  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.691880  160939 provision.go:87] duration metric: took 17.266922ms to configureAuth
	W0522 18:33:16.691906  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.691924  160939 retry.go:31] will retry after 502.555206ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.194526  160939 provision.go:84] configureAuth start
	I0522 18:33:17.194646  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.210421  160939 provision.go:87] duration metric: took 15.865877ms to configureAuth
	W0522 18:33:17.210440  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.210460  160939 retry.go:31] will retry after 567.726401ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.779177  160939 provision.go:84] configureAuth start
	I0522 18:33:17.779290  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.795539  160939 provision.go:87] duration metric: took 16.336289ms to configureAuth
	W0522 18:33:17.795558  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.795575  160939 retry.go:31] will retry after 1.826878631s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.622720  160939 provision.go:84] configureAuth start
	I0522 18:33:19.622824  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:19.638518  160939 provision.go:87] duration metric: took 15.756609ms to configureAuth
	W0522 18:33:19.638535  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.638551  160939 retry.go:31] will retry after 1.924893574s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.564442  160939 provision.go:84] configureAuth start
	I0522 18:33:21.564544  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:21.580835  160939 provision.go:87] duration metric: took 16.362041ms to configureAuth
	W0522 18:33:21.580858  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.580874  160939 retry.go:31] will retry after 4.939303373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.521956  160939 provision.go:84] configureAuth start
	I0522 18:33:26.522061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:26.537982  160939 provision.go:87] duration metric: took 16.001203ms to configureAuth
	W0522 18:33:26.538004  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.538030  160939 retry.go:31] will retry after 3.636518909s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.175081  160939 provision.go:84] configureAuth start
	I0522 18:33:30.175184  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:30.191022  160939 provision.go:87] duration metric: took 15.915164ms to configureAuth
	W0522 18:33:30.191041  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.191058  160939 retry.go:31] will retry after 10.480093853s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.671328  160939 provision.go:84] configureAuth start
	I0522 18:33:40.671406  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:40.687409  160939 provision.go:87] duration metric: took 16.054951ms to configureAuth
	W0522 18:33:40.687427  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.687455  160939 retry.go:31] will retry after 15.937633407s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.627256  160939 provision.go:84] configureAuth start
	I0522 18:33:56.627376  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:56.643481  160939 provision.go:87] duration metric: took 16.179065ms to configureAuth
	W0522 18:33:56.643501  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.643521  160939 retry.go:31] will retry after 13.921044681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.565323  160939 provision.go:84] configureAuth start
	I0522 18:34:10.565412  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:10.582184  160939 provision.go:87] duration metric: took 16.828213ms to configureAuth
	W0522 18:34:10.582203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.582221  160939 retry.go:31] will retry after 29.913467421s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.496709  160939 provision.go:84] configureAuth start
	I0522 18:34:40.496791  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:40.512924  160939 provision.go:87] duration metric: took 16.185762ms to configureAuth
	W0522 18:34:40.512946  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512964  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512971  160939 machine.go:97] duration metric: took 1m26.040084691s to provisionDockerMachine
	I0522 18:34:40.512977  160939 client.go:171] duration metric: took 1m31.612534317s to LocalClient.Create
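The loop above is minikube's retry helper backing off from sub-millisecond delays up to tens of seconds before giving up after roughly 90 seconds. Every attempt fails identically: the inspect template indexes .NetworkSettings.Networks by the machine name ("multinode-737786-m02"), while the container is attached to the cluster network ("multinode-737786", as the --network flag and the docker network inspect calls later in this log show), so {{with ...}} finds no entry and the template prints nothing. Splitting that empty output on the comma yields a single empty element, which is exactly the "got 1 values: []" in the error. A minimal sketch of the failing parse, assuming the split-on-comma check (the helper name is illustrative, not minikube's):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // parseAddrs mimics the check behind the error above: the inspect template
    // prints "IPv4,IPv6" for the named network, and the caller splits on the
    // comma. When the container is not attached to the network the template
    // indexed, the output is empty and the split yields one empty element.
    func parseAddrs(out string) (ipv4, ipv6 string, err error) {
    	parts := strings.Split(strings.TrimSpace(out), ",")
    	if len(parts) != 2 {
    		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(parts), parts)
    	}
    	return parts[0], parts[1], nil
    }

    func main() {
    	fmt.Println(parseAddrs("192.168.67.3,")) // attached: IPv4 set, IPv6 empty
    	fmt.Println(parseAddrs(""))              // wrong network key: "got 1 values: []"
    }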
	I0522 18:34:42.514189  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.514234  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:42.530404  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:34:42.611715  160939 command_runner.go:130] > 27%
	I0522 18:34:42.611789  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.611812  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:34:42.615669  160939 command_runner.go:130] > 214G
	I0522 18:34:42.615707  160939 start.go:128] duration metric: took 1m33.717073149s to createHost
	I0522 18:34:42.615722  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m33.717228717s
	W0522 18:34:42.615744  160939 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:42.616137  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:42.632434  160939 stop.go:39] StopHost: multinode-737786-m02
	W0522 18:34:42.632685  160939 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.634506  160939 out.go:177] * Stopping node "multinode-737786-m02"  ...
	I0522 18:34:42.635683  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	W0522 18:34:42.651010  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.652276  160939 out.go:177] * Powering off "multinode-737786-m02" via SSH ...
	I0522 18:34:42.653470  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	I0522 18:34:43.708767  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.725456  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:43.725497  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:43.725503  160939 stop.go:96] shutdown container: err=<nil>
	I0522 18:34:43.725538  160939 main.go:141] libmachine: Stopping "multinode-737786-m02"...
	I0522 18:34:43.725609  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.740494  160939 stop.go:66] stop err: Machine "multinode-737786-m02" is already stopped.
	I0522 18:34:43.740519  160939 stop.go:69] host is already stopped
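Stopping the half-provisioned node works by running sudo init 0 inside the container and then polling its Docker state until it reports stopped; the second status check above lands after the container has already exited, hence the "already stopped" message. A sketch of that shutdown-and-poll sequence, with names and timings assumed for illustration:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // waitStopped polls `docker container inspect --format {{.State.Status}}`
    // after an in-container shutdown until Docker reports the container exited.
    func waitStopped(name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("docker", "container", "inspect",
    			name, "--format", "{{.State.Status}}").Output()
    		if err == nil && strings.TrimSpace(string(out)) == "exited" {
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s did not stop within %s", name, timeout)
    }

    func main() {
    	exec.Command("docker", "exec", "--privileged", "-t",
    		"multinode-737786-m02", "/bin/bash", "-c", "sudo init 0").Run()
    	fmt.Println(waitStopped("multinode-737786-m02", 30*time.Second))
    }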
	W0522 18:34:44.740739  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:44.742589  160939 out.go:177] * Deleting "multinode-737786-m02" in docker ...
	I0522 18:34:44.743791  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	I0522 18:34:44.759917  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:44.775348  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	W0522 18:34:44.791230  160939 cli_runner.go:211] docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:34:44.791265  160939 oci.go:650] error shutdown multinode-737786-m02: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 2dc5a71c55c9ef5d6ad1baa728c2ff15efe34f377c26beee83af68ffc394ce01 is not running
	I0522 18:34:45.792215  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:45.808448  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:45.808478  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:45.808522  160939 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m02
	I0522 18:34:45.828241  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	W0522 18:34:45.843001  160939 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m02 returned with exit code 1
	I0522 18:34:45.843068  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:45.858067  160939 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:34:45.872863  160939 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:34:45.872955  160939 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
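Deleting the node also tries to remove the cluster network, but that can only succeed once no container is attached; here the primary node multinode-737786 is still running, so the removal fails and is deliberately tolerated ("which might be okay"). A sketch of that tolerant cleanup, assuming Docker's "has active endpoints" error text for an in-use network:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // removeNetworkIfUnused deletes a Docker network but treats "still in use"
    // as non-fatal, since another node container may legitimately hold it.
    func removeNetworkIfUnused(name string) error {
    	out, err := exec.Command("docker", "network", "rm", name).CombinedOutput()
    	if err != nil && strings.Contains(string(out), "has active endpoints") {
    		return nil // attached to a running container; leave it for later
    	}
    	if err != nil {
    		return fmt.Errorf("docker network rm %s: %v: %s", name, err, out)
    	}
    	return nil
    }

    func main() {
    	fmt.Println(removeNetworkIfUnused("multinode-737786"))
    }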
	W0522 18:34:45.873163  160939 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:45.873175  160939 start.go:728] Will try again in 5 seconds ...
	I0522 18:34:50.874261  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:34:50.874388  160939 start.go:364] duration metric: took 68.497µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:34:50.874412  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:34:50.874486  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:34:50.876407  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:34:50.876543  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:34:50.876576  160939 client.go:168] LocalClient.Create starting
	I0522 18:34:50.876662  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:34:50.876712  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876732  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.876835  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:34:50.876869  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876890  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.877138  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:50.893470  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc0009258c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:34:50.893509  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
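On the second attempt the existing multinode-737786 network (gateway 192.168.67.1) is reused and the worker gets a deterministic address: the gateway takes .1, the primary node .2, and each additional node the next host offset, which is how 192.168.67.3 comes out for m02. A rough sketch of that derivation (the offset rule is assumed from the addresses visible in this log):

    package main

    import (
    	"fmt"
    	"net"
    )

    // nodeIP derives the nth node's address in the cluster's /24: gateway .1,
    // primary node .2, second node .3, and so on.
    func nodeIP(gateway net.IP, nodeIndex int) net.IP {
    	ip := gateway.To4()
    	out := make(net.IP, 4)
    	copy(out, ip)
    	out[3] = ip[3] + byte(nodeIndex)
    	return out
    }

    func main() {
    	gw := net.ParseIP("192.168.67.1")
    	fmt.Println(nodeIP(gw, 2)) // 192.168.67.3 for multinode-737786-m02
    }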
	I0522 18:34:50.893558  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:34:50.909079  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:34:50.925444  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:34:50.925538  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:34:51.321868  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:34:51.321909  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:34:51.321928  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:34:51.321980  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:34:55.613221  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291204502s)
	I0522 18:34:55.613251  160939 kic.go:203] duration metric: took 4.291320169s to extract preloaded images to volume ...
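The preload step avoids pulling images inside the node: a throwaway container mounts the host-side tarball read-only next to the node's volume and untars it there, so when the node boots, its /var already contains the images for v1.30.1. A sketch of that one-shot extraction, with the paths and image taken from the log and the helper name assumed:

    package main

    import "os/exec"

    // extractPreload runs a disposable container whose entrypoint is tar,
    // mounting the lz4 preload read-only and the node volume as the target.
    func extractPreload(tarball, volume, image string) error {
    	return exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").Run()
    }

    func main() {
    	_ = extractPreload(
    		"/home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4",
    		"multinode-737786-m02",
    		"gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887")
    }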
	W0522 18:34:55.613360  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:34:55.613435  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:34:55.658317  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:34:55.924047  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:34:55.941247  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:55.958588  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:34:56.004446  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:34:56.004476  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:34:56.219497  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:34:56.219536  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:34:56.240489  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.268881  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:34:56.268907  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
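SSH access to the node is bootstrapped by generating a fresh keypair under the profile directory, copying the public half (381 bytes here) into /home/docker/.ssh/authorized_keys, and chowning it to the in-container docker user, which are the three kic steps above. A sketch of the copy-and-chown half, assuming a plain docker exec transport rather than minikube's kic_runner:

    package main

    import (
    	"bytes"
    	"os"
    	"os/exec"
    )

    // pushAuthorizedKey streams the generated public key into the node and
    // fixes ownership so the in-container docker user can log in over SSH.
    func pushAuthorizedKey(container, pubKeyPath string) error {
    	key, err := os.ReadFile(pubKeyPath)
    	if err != nil {
    		return err
    	}
    	cmd := exec.Command("docker", "exec", "-i", "--privileged", container,
    		"/bin/sh", "-c",
    		"install -d -m 700 -o docker -g docker /home/docker/.ssh && "+
    			"cat > /home/docker/.ssh/authorized_keys && "+
    			"chown docker:docker /home/docker/.ssh/authorized_keys")
    	cmd.Stdin = bytes.NewReader(key)
    	return cmd.Run()
    }

    func main() {
    	_ = pushAuthorizedKey("multinode-737786-m02",
    		"/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub")
    }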
	I0522 18:34:56.353114  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.375972  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:34:56.376058  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.395706  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.395915  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.395934  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:34:56.554445  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.554477  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:34:56.554533  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.573230  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.573401  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.573414  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:34:56.702163  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.702242  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.718029  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.718187  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.718204  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:34:56.830876  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:34:56.830907  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:34:56.830922  160939 ubuntu.go:177] setting up certificates
	I0522 18:34:56.830931  160939 provision.go:84] configureAuth start
	I0522 18:34:56.830976  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.846805  160939 provision.go:87] duration metric: took 15.865379ms to configureAuth
	W0522 18:34:56.846831  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.846851  160939 retry.go:31] will retry after 140.64µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.847967  160939 provision.go:84] configureAuth start
	I0522 18:34:56.848042  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.862744  160939 provision.go:87] duration metric: took 14.756628ms to configureAuth
	W0522 18:34:56.862761  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.862777  160939 retry.go:31] will retry after 137.24µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.863887  160939 provision.go:84] configureAuth start
	I0522 18:34:56.863944  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.878368  160939 provision.go:87] duration metric: took 14.464443ms to configureAuth
	W0522 18:34:56.878383  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.878401  160939 retry.go:31] will retry after 307.999µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.879516  160939 provision.go:84] configureAuth start
	I0522 18:34:56.879573  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.894089  160939 provision.go:87] duration metric: took 14.555182ms to configureAuth
	W0522 18:34:56.894104  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.894119  160939 retry.go:31] will retry after 344.81µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.895224  160939 provision.go:84] configureAuth start
	I0522 18:34:56.895305  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.909660  160939 provision.go:87] duration metric: took 14.420335ms to configureAuth
	W0522 18:34:56.909677  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.909697  160939 retry.go:31] will retry after 721.739µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.910804  160939 provision.go:84] configureAuth start
	I0522 18:34:56.910856  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.925678  160939 provision.go:87] duration metric: took 14.857697ms to configureAuth
	W0522 18:34:56.925695  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.925714  160939 retry.go:31] will retry after 381.6µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.926834  160939 provision.go:84] configureAuth start
	I0522 18:34:56.926886  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.941681  160939 provision.go:87] duration metric: took 14.831201ms to configureAuth
	W0522 18:34:56.941702  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.941722  160939 retry.go:31] will retry after 897.088µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.942836  160939 provision.go:84] configureAuth start
	I0522 18:34:56.942908  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.957491  160939 provision.go:87] duration metric: took 14.636033ms to configureAuth
	W0522 18:34:56.957512  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.957529  160939 retry.go:31] will retry after 1.800181ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.959714  160939 provision.go:84] configureAuth start
	I0522 18:34:56.959790  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.976307  160939 provision.go:87] duration metric: took 16.571335ms to configureAuth
	W0522 18:34:56.976326  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.976342  160939 retry.go:31] will retry after 2.324455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.979479  160939 provision.go:84] configureAuth start
	I0522 18:34:56.979532  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.994677  160939 provision.go:87] duration metric: took 15.180277ms to configureAuth
	W0522 18:34:56.994693  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.994709  160939 retry.go:31] will retry after 3.105759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.998893  160939 provision.go:84] configureAuth start
	I0522 18:34:56.998946  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.014214  160939 provision.go:87] duration metric: took 15.303755ms to configureAuth
	W0522 18:34:57.014235  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.014254  160939 retry.go:31] will retry after 5.839455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.020445  160939 provision.go:84] configureAuth start
	I0522 18:34:57.020525  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.035868  160939 provision.go:87] duration metric: took 15.4048ms to configureAuth
	W0522 18:34:57.035886  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.035903  160939 retry.go:31] will retry after 5.406932ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.042088  160939 provision.go:84] configureAuth start
	I0522 18:34:57.042156  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.058449  160939 provision.go:87] duration metric: took 16.342041ms to configureAuth
	W0522 18:34:57.058472  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.058492  160939 retry.go:31] will retry after 11.838168ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.070675  160939 provision.go:84] configureAuth start
	I0522 18:34:57.070741  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.085470  160939 provision.go:87] duration metric: took 14.777244ms to configureAuth
	W0522 18:34:57.085486  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.085502  160939 retry.go:31] will retry after 23.959822ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.109694  160939 provision.go:84] configureAuth start
	I0522 18:34:57.109776  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.124985  160939 provision.go:87] duration metric: took 15.261358ms to configureAuth
	W0522 18:34:57.125000  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.125016  160939 retry.go:31] will retry after 27.869578ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.153221  160939 provision.go:84] configureAuth start
	I0522 18:34:57.153307  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.169108  160939 provision.go:87] duration metric: took 15.85438ms to configureAuth
	W0522 18:34:57.169127  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.169146  160939 retry.go:31] will retry after 51.257536ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.221342  160939 provision.go:84] configureAuth start
	I0522 18:34:57.221408  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.237003  160939 provision.go:87] duration metric: took 15.637311ms to configureAuth
	W0522 18:34:57.237024  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.237043  160939 retry.go:31] will retry after 39.576908ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.277194  160939 provision.go:84] configureAuth start
	I0522 18:34:57.277272  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.292521  160939 provision.go:87] duration metric: took 15.297184ms to configureAuth
	W0522 18:34:57.292539  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.292557  160939 retry.go:31] will retry after 99.452062ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.392811  160939 provision.go:84] configureAuth start
	I0522 18:34:57.392913  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.410711  160939 provision.go:87] duration metric: took 17.84636ms to configureAuth
	W0522 18:34:57.410765  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.410815  160939 retry.go:31] will retry after 143.960372ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.555133  160939 provision.go:84] configureAuth start
	I0522 18:34:57.555208  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.571320  160939 provision.go:87] duration metric: took 16.160526ms to configureAuth
	W0522 18:34:57.571343  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.571360  160939 retry.go:31] will retry after 155.348601ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.727681  160939 provision.go:84] configureAuth start
	I0522 18:34:57.727762  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.743313  160939 provision.go:87] duration metric: took 15.603694ms to configureAuth
	W0522 18:34:57.743335  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.743351  160939 retry.go:31] will retry after 378.804808ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.122902  160939 provision.go:84] configureAuth start
	I0522 18:34:58.123010  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.139688  160939 provision.go:87] duration metric: took 16.744877ms to configureAuth
	W0522 18:34:58.139707  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.139724  160939 retry.go:31] will retry after 334.927027ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.475218  160939 provision.go:84] configureAuth start
	I0522 18:34:58.475348  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.491224  160939 provision.go:87] duration metric: took 15.959288ms to configureAuth
	W0522 18:34:58.491241  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.491258  160939 retry.go:31] will retry after 382.857061ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.874898  160939 provision.go:84] configureAuth start
	I0522 18:34:58.875006  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.891400  160939 provision.go:87] duration metric: took 16.476022ms to configureAuth
	W0522 18:34:58.891425  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.891445  160939 retry.go:31] will retry after 908.607112ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.800452  160939 provision.go:84] configureAuth start
	I0522 18:34:59.800565  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:59.817521  160939 provision.go:87] duration metric: took 17.040678ms to configureAuth
	W0522 18:34:59.817541  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.817559  160939 retry.go:31] will retry after 2.399990762s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.218011  160939 provision.go:84] configureAuth start
	I0522 18:35:02.218103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:02.233382  160939 provision.go:87] duration metric: took 15.343422ms to configureAuth
	W0522 18:35:02.233400  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.233417  160939 retry.go:31] will retry after 3.631413751s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.866094  160939 provision.go:84] configureAuth start
	I0522 18:35:05.866192  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:05.883038  160939 provision.go:87] duration metric: took 16.913162ms to configureAuth
	W0522 18:35:05.883057  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.883075  160939 retry.go:31] will retry after 4.401726343s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.285941  160939 provision.go:84] configureAuth start
	I0522 18:35:10.286047  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:10.303158  160939 provision.go:87] duration metric: took 17.185304ms to configureAuth
	W0522 18:35:10.303178  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.303195  160939 retry.go:31] will retry after 5.499851087s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.803345  160939 provision.go:84] configureAuth start
	I0522 18:35:15.803456  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:15.820047  160939 provision.go:87] duration metric: took 16.668915ms to configureAuth
	W0522 18:35:15.820069  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.820088  160939 retry.go:31] will retry after 6.21478213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.035749  160939 provision.go:84] configureAuth start
	I0522 18:35:22.035888  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:22.052346  160939 provision.go:87] duration metric: took 16.569923ms to configureAuth
	W0522 18:35:22.052365  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.052383  160939 retry.go:31] will retry after 10.717404274s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.770612  160939 provision.go:84] configureAuth start
	I0522 18:35:32.770702  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:32.786847  160939 provision.go:87] duration metric: took 16.20902ms to configureAuth
	W0522 18:35:32.786866  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.786882  160939 retry.go:31] will retry after 26.374349839s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.162251  160939 provision.go:84] configureAuth start
	I0522 18:35:59.162338  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:59.177866  160939 provision.go:87] duration metric: took 15.590678ms to configureAuth
	W0522 18:35:59.177883  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.177900  160939 retry.go:31] will retry after 23.779194983s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.957560  160939 provision.go:84] configureAuth start
	I0522 18:36:22.957642  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:36:22.973473  160939 provision.go:87] duration metric: took 15.882846ms to configureAuth
	W0522 18:36:22.973490  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973508  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973514  160939 machine.go:97] duration metric: took 1m26.59751999s to provisionDockerMachine
	I0522 18:36:22.973521  160939 client.go:171] duration metric: took 1m32.0969361s to LocalClient.Create
	I0522 18:36:24.974123  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:24.974170  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:36:24.990325  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:36:25.071724  160939 command_runner.go:130] > 27%
	I0522 18:36:25.071789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:36:25.075456  160939 command_runner.go:130] > 214G
	I0522 18:36:25.075742  160939 start.go:128] duration metric: took 1m34.201241799s to createHost
	I0522 18:36:25.075767  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m34.20136546s
	W0522 18:36:25.075854  160939 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:25.077767  160939 out.go:177] 
	W0522 18:36:25.079095  160939 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:36:25.079109  160939 out.go:239] * 
	W0522 18:36:25.079919  160939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:36:25.081455  160939 out.go:177] 
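That closes the loop visible in this log: createHost failed, the half-made node was stopped and deleted, one more attempt ran five seconds later and died on the identical configureAuth error, so the whole start aborts with GUEST_START (the exit status 80 reported at the top of this test). The shape of that outer flow, with function names assumed for illustration:

    package main

    import (
    	"fmt"
    	"time"
    )

    // startWithRetry captures the control flow in this log: one cleanup-and-
    // retry after a failed host creation, then a hard failure when the second
    // attempt dies the same way.
    func startWithRetry(create func() error, cleanup func()) error {
    	if err := create(); err != nil {
    		cleanup()
    		time.Sleep(5 * time.Second)
    		if err := create(); err != nil {
    			return fmt.Errorf("failed to start node: adding node: %w", err)
    		}
    	}
    	return nil
    }

    func main() {
    	err := startWithRetry(
    		func() error { return fmt.Errorf("provisioning: temporary error") },
    		func() { fmt.Println("stopping and deleting half-created node") },
    	)
    	fmt.Println(err) // non-nil here -> minikube exits with GUEST_START
    }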
	
	
	==> Docker <==
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:26 multinode-737786 dockerd[1210]: 2024/05/22 18:36:26 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:36:27 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:36:27Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7fefb8ab9046a93fa90099406fe22d3ab5b99d1e81ed91b35c2e7790f7cd2c3c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 18:36:29 multinode-737786 cri-dockerd[1429]: time="2024-05-22T18:36:29Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e5611854b2b6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   15 minutes ago      Running             busybox                   0                   7fefb8ab9046a       busybox-fc5497c4f-7zbr8
	14ca8a91c3a85       cbb01a7bd410d                                                                                         19 minutes ago      Running             coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              19 minutes ago      Running             kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	16cb7c11afec8       6e38f40d628db                                                                                         19 minutes ago      Running             storage-provisioner       0                   27a641da2a092       storage-provisioner
	b73d925361c05       cbb01a7bd410d                                                                                         19 minutes ago      Exited              coredns                   0                   6711c2a968d71       coredns-7db6d8ff4d-jhsz9
	4394527287d9e       747097150317f                                                                                         19 minutes ago      Running             kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                                         19 minutes ago      Running             kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                                         19 minutes ago      Running             etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                                         19 minutes ago      Running             kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                                         19 minutes ago      Running             kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	[INFO] 10.244.0.3:48378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238684s
	[INFO] 10.244.0.3:59221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013090305s
	[INFO] 10.244.0.3:42881 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000740933s
	[INFO] 10.244.0.3:51488 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.022252255s
	[INFO] 10.244.0.3:57389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143058s
	[INFO] 10.244.0.3:48854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005255577s
	[INFO] 10.244.0.3:37749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129992s
	[INFO] 10.244.0.3:49159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143259s
	[INFO] 10.244.0.3:33267 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003880164s
	[INFO] 10.244.0.3:55644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123464s
	[INFO] 10.244.0.3:40518 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115443s
	[INFO] 10.244.0.3:44250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088045s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102385s
	[INFO] 10.244.0.3:58734 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104426s
	[INFO] 10.244.0.3:33373 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089833s
	[INFO] 10.244.0.3:46218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084391s
	[INFO] 10.244.0.3:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011407s
	[INFO] 10.244.0.3:41894 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140377s
	[INFO] 10.244.0.3:40760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132699s
	[INFO] 10.244.0.3:37622 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097943s
	
	
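The suffixed lookups in the CoreDNS log above (kubernetes.default.default.svc.cluster.local, kubernetes.default.svc.cluster.local) follow from the resolv.conf that cri-dockerd wrote at 18:36:27: with ndots:5, a name containing fewer than five dots is expanded with each search domain. A simplified sketch of that expansion rule (real resolvers also special-case absolute names ending in a dot):

	package main

	import (
		"fmt"
		"strings"
	)

	// Candidate FQDNs for a name under resolv.conf semantics: names with
	// fewer than ndots dots are tried against each search domain as well as
	// literally, which produces the suffixed queries seen in the log.
	func candidates(name string, search []string, ndots int) []string {
		var out []string
		if strings.Count(name, ".") < ndots {
			for _, s := range search {
				out = append(out, name+"."+s)
			}
		}
		return append(out, name)
	}

	func main() {
		search := []string{"default.svc.cluster.local", "svc.cluster.local", "cluster.local"}
		for _, c := range candidates("kubernetes.default", search, 5) {
			fmt.Println(c)
		}
	}
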
	==> coredns [b73d925361c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:52:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 796df425fb994719a2b6ac89f60c2334
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m   node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	
	
	==> dmesg <==
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	[May22 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 88 87 ea 82 8c 08 06
	[  +0.002367] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 1a b3 ac 14 45 08 06
	[May22 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 89 e2 0f b2 b8 08 06
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.364321Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364376Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364414Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 1"}
	{"level":"info","ts":"2024-05-22T18:32:33.364428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.365643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.365639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:32:33.365646Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.365693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.36588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.365903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	{"level":"info","ts":"2024-05-22T18:42:33.669298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-05-22T18:42:33.674226Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":669,"took":"4.650962ms","hash":2988179383,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-22T18:42:33.674261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2988179383,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:47:33.674441Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-05-22T18:47:33.676887Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":911,"took":"2.169071ms","hash":3399617496,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:47:33.676921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3399617496,"revision":911,"compact-revision":669}
	
	
	==> kernel <==
	 18:52:21 up  1:34,  0 users,  load average: 0.45, 0.37, 0.34
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:50:16.755613       1 main.go:227] handling current node
	I0522 18:50:26.759059       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:50:26.759084       1 main.go:227] handling current node
	I0522 18:50:36.762736       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:50:36.762759       1 main.go:227] handling current node
	I0522 18:50:46.774779       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:50:46.774803       1 main.go:227] handling current node
	I0522 18:50:56.778426       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:50:56.778448       1 main.go:227] handling current node
	I0522 18:51:06.790552       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:06.790575       1 main.go:227] handling current node
	I0522 18:51:16.793643       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:16.793670       1 main.go:227] handling current node
	I0522 18:51:26.796584       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:26.796608       1 main.go:227] handling current node
	I0522 18:51:36.801455       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:36.801477       1 main.go:227] handling current node
	I0522 18:51:46.810361       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:46.810385       1 main.go:227] handling current node
	I0522 18:51:56.813665       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:56.813687       1 main.go:227] handling current node
	I0522 18:52:06.822432       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:52:06.822458       1 main.go:227] handling current node
	I0522 18:52:16.826079       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:52:16.826100       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6991b35c6800] <==
	I0522 18:32:35.449798       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:32:35.453291       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:32:35.453308       1 policy_source.go:224] refreshing policies
	I0522 18:32:35.468422       1 controller.go:615] quota admission added evaluator for: namespaces
	I0522 18:32:35.648097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:32:36.270908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 18:32:36.276360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 18:32:36.276373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:32:36.650126       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 18:32:36.683129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 18:32:36.777692       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 18:32:36.791941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0522 18:32:36.793832       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:32:36.798754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 18:32:37.359568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 18:32:37.803958       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 18:32:37.812834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 18:32:37.819384       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 18:32:51.513861       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 18:32:51.614880       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:48:10.913684       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57644: use of closed network connection
	E0522 18:48:11.175047       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57696: use of closed network connection
	E0522 18:48:11.423032       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57770: use of closed network connection
	E0522 18:48:13.525053       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57842: use of closed network connection
	E0522 18:48:13.672815       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57864: use of closed network connection
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	I0522 18:36:27.123251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.253947ms"
	I0522 18:36:27.133722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.381144ms"
	I0522 18:36:27.133807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.98µs"
	I0522 18:36:27.133845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.606µs"
	I0522 18:36:30.202749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.775378ms"
	I0522 18:36:30.202822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.162µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:35.377344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.252907    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.988563    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhhmr" podStartSLOduration=2.9885258439999998 podStartE2EDuration="2.988525844s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.988079663 +0000 UTC m=+16.414649501" watchObservedRunningTime="2024-05-22 18:32:53.988525844 +0000 UTC m=+16.415095679"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.995975    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.995953678 podStartE2EDuration="995.953678ms" podCreationTimestamp="2024-05-22 18:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.995721962 +0000 UTC m=+16.422291803" watchObservedRunningTime="2024-05-22 18:32:53.995953678 +0000 UTC m=+16.422523513"
	May 22 18:32:54 multinode-737786 kubelet[2370]: I0522 18:32:54.011952    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jhsz9" podStartSLOduration=3.011934656 podStartE2EDuration="3.011934656s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:54.011824217 +0000 UTC m=+16.438394051" watchObservedRunningTime="2024-05-22 18:32:54.011934656 +0000 UTC m=+16.438504490"
	May 22 18:32:56 multinode-737786 kubelet[2370]: I0522 18:32:56.027149    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qpfbl" podStartSLOduration=2.150242403 podStartE2EDuration="5.027130161s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="2024-05-22 18:32:52.549285586 +0000 UTC m=+14.975855404" lastFinishedPulling="2024-05-22 18:32:55.426173334 +0000 UTC m=+17.852743162" observedRunningTime="2024-05-22 18:32:56.026868759 +0000 UTC m=+18.453438592" watchObservedRunningTime="2024-05-22 18:32:56.027130161 +0000 UTC m=+18.453699994"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.024575    2370 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.025200    2370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467011    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467063    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467471    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.469105    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9" (OuterVolumeSpecName: "kube-api-access-44bz9") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "kube-api-access-44bz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567723    2370 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567767    2370 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.104709    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.116635    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.118819    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: E0522 18:33:07.119523    2370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.119568    2370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"} err="failed to get container status \"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de\": rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.656301    2370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" path="/var/lib/kubelet/pods/be9eeea7-ca23-4606-8965-0eb7a95e4a0d/volumes"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113341    2370 topology_manager.go:215] "Topology Admit Handler" podUID="3cb1c926-1ddd-432d-bfae-23cc2cf1d67e" podNamespace="default" podName="busybox-fc5497c4f-7zbr8"
	May 22 18:36:27 multinode-737786 kubelet[2370]: E0522 18:36:27.113441    2370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113480    2370 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.310549    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2v4\" (UniqueName: \"kubernetes.io/projected/3cb1c926-1ddd-432d-bfae-23cc2cf1d67e-kube-api-access-bt2v4\") pod \"busybox-fc5497c4f-7zbr8\" (UID: \"3cb1c926-1ddd-432d-bfae-23cc2cf1d67e\") " pod="default/busybox-fc5497c4f-7zbr8"
	May 22 18:36:30 multinode-737786 kubelet[2370]: I0522 18:36:30.199164    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-7zbr8" podStartSLOduration=1.5746006019999998 podStartE2EDuration="3.199142439s" podCreationTimestamp="2024-05-22 18:36:27 +0000 UTC" firstStartedPulling="2024-05-22 18:36:27.886226491 +0000 UTC m=+230.312796315" lastFinishedPulling="2024-05-22 18:36:29.510768323 +0000 UTC m=+231.937338152" observedRunningTime="2024-05-22 18:36:30.198865287 +0000 UTC m=+232.625435120" watchObservedRunningTime="2024-05-22 18:36:30.199142439 +0000 UTC m=+232.625712274"
	May 22 18:48:11 multinode-737786 kubelet[2370]: E0522 18:48:11.423039    2370 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:55084->[::1]:43097: write tcp [::1]:55084->[::1]:43097: write: broken pipe
	
	
	==> storage-provisioner [16cb7c11afec] <==
	I0522 18:32:53.558799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:32:53.565899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:32:53.565955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:32:53.572167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:32:53.572280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	I0522 18:32:53.573084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef became leader
	I0522 18:32:53.672834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/AddNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  43s (x4 over 15m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

-- /stdout --
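
The FailedScheduling event above is what a required podAntiAffinity term produces: every app=busybox replica must land on a distinct node, so with only one schedulable node the second replica stays Pending indefinitely. A hedged sketch of the kind of affinity block that yields this message (the selector and topology key are inferred from the event text, not copied from the test's actual manifest):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// busyboxAntiAffinity forbids two app=busybox pods from sharing a node.
// With the only node already running a replica, the scheduler reports
// "didn't match pod anti-affinity rules", exactly as in the event above.
func busyboxAntiAffinity() *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				// One pod per hostname, i.e. per node.
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
}

func main() {
	fmt.Printf("%+v\n", busyboxAntiAffinity())
}
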
helpers_test.go:285: <<< TestMultiNode/serial/AddNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/AddNode (247.13s)

TestMultiNode/serial/StopNode (3.49s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-737786 node stop m03: (1.158620457s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status: exit status 7 (303.285072ms)

-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	E0522 18:52:32.411461  185430 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	E0522 18:52:32.411496  185430 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

** /stderr **
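
The "should have 2 values, got 1 values" error in stderr comes from the IP lookup visible in the verbose run below: minikube renders "{{.IPAddress}},{{.GlobalIPv6Address}}" for the container's network and splits on the comma, expecting an IPv4/IPv6 pair. When the container has no attachment on the expected network, the template renders an empty string, and splitting that yields a single empty field. A small reconstruction of the parse, assuming the docker CLI is on PATH (containerIPs is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIPs renders the IPv4 and IPv6 addresses of a container on the
// named network and insists on exactly two comma-separated fields, the same
// shape of check status.go performs. A detached network renders "", and
// strings.Split("", ",") has length 1, producing the error seen above.
func containerIPs(container, network string) (ipv4, ipv6 string, err error) {
	format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", "", err
	}
	fields := strings.Split(strings.TrimSpace(string(out)), ",")
	if len(fields) != 2 {
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(fields), fields)
	}
	return fields[0], fields[1], nil
}

func main() {
	ip4, ip6, err := containerIPs("multinode-737786-m02", "multinode-737786-m02")
	fmt.Println(ip4, ip6, err)
}
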
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr: exit status 7 (309.808659ms)

-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0522 18:52:32.465403  185557 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:52:32.465661  185557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:52:32.465671  185557 out.go:304] Setting ErrFile to fd 2...
	I0522 18:52:32.465675  185557 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:52:32.465825  185557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:52:32.465979  185557 out.go:298] Setting JSON to false
	I0522 18:52:32.466003  185557 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:52:32.466044  185557 notify.go:220] Checking for updates...
	I0522 18:52:32.466453  185557 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:52:32.466472  185557 status.go:255] checking status of multinode-737786 ...
	I0522 18:52:32.466887  185557 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:52:32.483946  185557 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:52:32.483967  185557 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:52:32.484232  185557 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:52:32.499284  185557 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:52:32.499515  185557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:52:32.499561  185557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:52:32.515407  185557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:52:32.595831  185557 ssh_runner.go:195] Run: systemctl --version
	I0522 18:52:32.599568  185557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:52:32.609222  185557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:52:32.655240  185557 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:42 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:52:32.646665631 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:52:32.655778  185557 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:52:32.655805  185557 api_server.go:166] Checking apiserver status ...
	I0522 18:52:32.655832  185557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:52:32.666230  185557 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:52:32.674678  185557 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:52:32.674765  185557 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:52:32.682130  185557 api_server.go:204] freezer state: "THAWED"
	I0522 18:52:32.682151  185557 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:52:32.685721  185557 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:52:32.685741  185557 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:52:32.685751  185557 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:52:32.685773  185557 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:52:32.685974  185557 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:52:32.702118  185557 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:52:32.702139  185557 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:52:32.702393  185557 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:52:32.721264  185557 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:52:32.721295  185557 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:52:32.721318  185557 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:52:32.721326  185557 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:52:32.721649  185557 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:52:32.736754  185557 status.go:330] multinode-737786-m03 host status = "Stopped" (err=<nil>)
	I0522 18:52:32.736774  185557 status.go:343] host is not running, skipping remaining checks
	I0522 18:52:32.736782  185557 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
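
The verbose trace above shows how the apiserver status is derived in three steps: pgrep finds the kube-apiserver pid, its freezer cgroup must read THAWED (a frozen cgroup means the process exists but is paused, so "running" would be misleading), and only then is https://192.168.67.2:8443/healthz probed. A sketch of the last two steps, assuming a cgroup v1 freezer layout as in the log and skipping TLS verification because the apiserver serves a self-signed certificate (function names are illustrative):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
	"time"
)

// freezerThawed reads freezer.state under the apiserver's cgroup directory,
// mirroring the `freezer state: "THAWED"` check above. The directory is the
// path resolved from /proc/<pid>/cgroup; pass it in as the first argument.
func freezerThawed(cgroupDir string) (bool, error) {
	b, err := os.ReadFile(cgroupDir + "/freezer.state")
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(b)) == "THAWED", nil
}

// healthz mirrors the final probe: expect HTTP 200 with body "ok".
func healthz(base string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(base + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return nil
}

func main() {
	thawed, err := freezerThawed(os.Args[1])
	if err != nil || !thawed {
		fmt.Println("apiserver cgroup not thawed:", err)
		return
	}
	fmt.Println("healthz:", healthz("https://192.168.67.2:8443"))
}
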
multinode_test.go:267: incorrect number of running kubelets: args "out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr": multinode-737786
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

multinode-737786-m02
type: Worker
host: Error
kubelet: Nonexistent

multinode-737786-m03
type: Worker
host: Stopped
kubelet: Stopped

helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:32:24.061487531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f033da40320ba3759bccac938ed954a52e8591012b592a9d459eac191ead142",
	            "SandboxKey": "/var/run/docker/netns/0f033da40320",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "0dc537a1f234204c25e41871b0c1dd246d8d646b8557cafc1f206a6312a58796",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
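
One detail worth pulling out of the inspect dump: each port binding is requested with an empty HostPort, so Docker assigns ephemeral host ports, and the live mappings appear only under NetworkSettings.Ports (22/tcp is 32897 here). That is why the status trace earlier runs a second inspect before opening its ssh client. A sketch of that lookup, assuming the docker CLI is on PATH (sshHostPort is an illustrative name):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort resolves the ephemeral host port Docker bound to the
// container's 22/tcp mapping, using the same template the status trace runs
// before building its ssh client (Port:32897 in the log above).
func sshHostPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("multinode-737786")
	fmt.Println(port, err)
}
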
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/StopNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/StopNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-737786 ssh -n multinode-737786-m02 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786_multinode-737786-m02.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786:/home/docker/cp-test.txt                           | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test_multinode-737786_multinode-737786-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m03 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786_multinode-737786-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp testdata/cp-test.txt                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m02_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m03 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp testdata/cp-test.txt                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m03_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02:/home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m02 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-737786 node stop m03                                                          | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:32:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:32:18.820070  160939 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:32:18.820158  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820166  160939 out.go:304] Setting ErrFile to fd 2...
	I0522 18:32:18.820169  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820356  160939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:32:18.820906  160939 out.go:298] Setting JSON to false
	I0522 18:32:18.821847  160939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4483,"bootTime":1716398256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:32:18.821903  160939 start.go:139] virtualization: kvm guest
	I0522 18:32:18.825068  160939 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:32:18.826450  160939 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:32:18.826451  160939 notify.go:220] Checking for updates...
	I0522 18:32:18.827917  160939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:32:18.829159  160939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:18.830471  160939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:32:18.832039  160939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:32:18.833509  160939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:32:18.835235  160939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:32:18.856978  160939 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:32:18.857075  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.904065  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.895172586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.904163  160939 docker.go:295] overlay module found
	I0522 18:32:18.906205  160939 out.go:177] * Using the docker driver based on user configuration
	I0522 18:32:18.907716  160939 start.go:297] selected driver: docker
	I0522 18:32:18.907745  160939 start.go:901] validating driver "docker" against <nil>
	I0522 18:32:18.907759  160939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:32:18.908486  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.953709  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.945190998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.953883  160939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 18:32:18.954091  160939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:32:18.956247  160939 out.go:177] * Using Docker driver with root privileges
	I0522 18:32:18.957858  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:18.957878  160939 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 18:32:18.957886  160939 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 18:32:18.957966  160939 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:18.959670  160939 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:32:18.961220  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:32:18.962715  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:32:18.964248  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:18.964293  160939 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:32:18.964303  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:32:18.964344  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:32:18.964398  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:32:18.964409  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:32:18.964718  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:18.964741  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json: {Name:mk43b46af9c3b0b30bdffa978db6463aacef7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:18.980726  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:32:18.980763  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:32:18.980786  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:32:18.980821  160939 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:32:18.980939  160939 start.go:364] duration metric: took 90.565µs to acquireMachinesLock for "multinode-737786"
	I0522 18:32:18.980970  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:18.981093  160939 start.go:125] createHost starting for "" (driver="docker")
	I0522 18:32:18.983462  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:32:18.983714  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:32:18.983748  160939 client.go:168] LocalClient.Create starting
	I0522 18:32:18.983834  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:32:18.983868  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983888  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.983948  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:32:18.983967  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983980  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.984396  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 18:32:18.999077  160939 cli_runner.go:211] docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 18:32:18.999133  160939 network_create.go:281] running [docker network inspect multinode-737786] to gather additional debugging logs...
	I0522 18:32:18.999152  160939 cli_runner.go:164] Run: docker network inspect multinode-737786
	W0522 18:32:19.013736  160939 cli_runner.go:211] docker network inspect multinode-737786 returned with exit code 1
	I0522 18:32:19.013763  160939 network_create.go:284] error running [docker network inspect multinode-737786]: docker network inspect multinode-737786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-737786 not found
	I0522 18:32:19.013789  160939 network_create.go:286] output of [docker network inspect multinode-737786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-737786 not found
	
	** /stderr **
	I0522 18:32:19.013898  160939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:19.029452  160939 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-638c6f0967c1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:dc:4f:16} reservation:<nil>}
	I0522 18:32:19.029912  160939 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcc438b661e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:35:35:2f} reservation:<nil>}
	I0522 18:32:19.030359  160939 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a34820}
	I0522 18:32:19.030382  160939 network_create.go:124] attempt to create docker network multinode-737786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0522 18:32:19.030423  160939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-737786 multinode-737786
	I0522 18:32:19.080955  160939 network_create.go:108] docker network multinode-737786 192.168.67.0/24 created
	I0522 18:32:19.080984  160939 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-737786" container
	I0522 18:32:19.081036  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:32:19.095483  160939 cli_runner.go:164] Run: docker volume create multinode-737786 --label name.minikube.sigs.k8s.io=multinode-737786 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:32:19.111371  160939 oci.go:103] Successfully created a docker volume multinode-737786
	I0522 18:32:19.111438  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --entrypoint /usr/bin/test -v multinode-737786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:32:19.598377  160939 oci.go:107] Successfully prepared a docker volume multinode-737786
	I0522 18:32:19.598412  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:19.598430  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:32:19.598501  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:32:23.741449  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.142877958s)
	I0522 18:32:23.741484  160939 kic.go:203] duration metric: took 4.14304939s to extract preloaded images to volume ...
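The 4.1s extraction step above runs tar inside a throwaway container that mounts the host tarball read-only and the named volume as the target. A sketch of the same invocation via os/exec (image and paths copied from the log; this replays the pattern rather than reproducing minikube's kic code):

// preload_extract.go - replay of the docker-run extraction shown above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	const (
		tarball = "/home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4"
		image   = "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"
	)
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",   // host tarball, read-only
		"-v", "multinode-737786:/extractDir", // named volume to populate
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}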
	W0522 18:32:23.741633  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:32:23.741756  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:32:23.786059  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786 --name multinode-737786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786 --network multinode-737786 --ip 192.168.67.2 --volume multinode-737786:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:32:24.069142  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Running}}
	I0522 18:32:24.086344  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.103978  160939 cli_runner.go:164] Run: docker exec multinode-737786 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:32:24.141807  160939 oci.go:144] the created container "multinode-737786" has a running status.
	I0522 18:32:24.141842  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa...
	I0522 18:32:24.342469  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:32:24.342509  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:32:24.363722  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.383810  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:32:24.383841  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786 chown docker:docker /home/docker/.ssh/authorized_keys]
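The key provisioning above creates an RSA keypair on the host and installs the public half (381 bytes here) as /home/docker/.ssh/authorized_keys inside the container. A sketch of producing such an authorized_keys line, assuming golang.org/x/crypto/ssh as a dependency; the docker-exec copy and chown steps are elided:

// authorized_key.go - render an RSA public key in authorized_keys format.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"
	"log"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		log.Fatal(err)
	}
	// One "ssh-rsa AAAA..." line, suitable for authorized_keys.
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
}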
	I0522 18:32:24.455784  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.474782  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:32:24.474871  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.497547  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.497754  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.497767  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:32:24.698482  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.698509  160939 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:32:24.698565  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.715252  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.715478  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.715502  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:32:24.840636  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.840711  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.857900  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.858096  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.858117  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:32:24.967023  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:32:24.967068  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:32:24.967091  160939 ubuntu.go:177] setting up certificates
	I0522 18:32:24.967102  160939 provision.go:84] configureAuth start
	I0522 18:32:24.967154  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:24.983423  160939 provision.go:143] copyHostCerts
	I0522 18:32:24.983455  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983479  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:32:24.983485  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983549  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:32:24.983615  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983633  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:32:24.983640  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983665  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:32:24.983708  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983723  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:32:24.983730  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983749  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:32:24.983796  160939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
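The server certificate above is signed by the profile CA and carries the SAN list shown in the log. A sketch with crypto/x509; key size, validity, and subject are assumptions for illustration, not minikube's exact parameters:

// server_cert.go - issue a CA-signed server cert with the log's SAN list.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"log"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA (the real one is loaded from ca.pem / ca-key.pem).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	ca, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-737786"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN list as logged: san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
		DNSNames:    []string{"localhost", "minikube", "multinode-737786"},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey); err != nil {
		log.Fatal(err)
	}
	log.Println("server cert issued")
}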
	I0522 18:32:25.113895  160939 provision.go:177] copyRemoteCerts
	I0522 18:32:25.113964  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:32:25.113999  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.130480  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:25.215072  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:32:25.215123  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:32:25.235444  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:32:25.235498  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:32:25.255313  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:32:25.255360  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:32:25.275241  160939 provision.go:87] duration metric: took 308.123688ms to configureAuth
	I0522 18:32:25.275280  160939 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:32:25.275447  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:25.275493  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.291597  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.291797  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.291813  160939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:32:25.403199  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:32:25.403222  160939 ubuntu.go:71] root file system type: overlay
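The probe above runs `df --output=fstype /` over SSH and keeps the last line of output, "overlay" here, as the root filesystem type recorded by the provisioner. The same check locally:

// rootfs_type.go - read the root filesystem type the way the log suggests.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("df", "--output=fstype", "/").Output()
	if err != nil {
		log.Fatal(err)
	}
	fields := strings.Fields(string(out)) // ["Type", "overlay"]
	fmt.Println("root file system type:", fields[len(fields)-1])
}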
	I0522 18:32:25.403368  160939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:32:25.403417  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.419508  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.419684  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.419742  160939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:32:25.540991  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:32:25.541068  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.556804  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.556997  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.557016  160939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:32:26.182116  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 18:32:25.538581939 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
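The command above is an idempotent-update pattern: render the unit to docker.service.new, diff it against the live unit, and only on a difference move the new file into place and reload/enable/restart. A sketch of the same logic, assuming the process already runs as root (minikube prefixes sudo):

// unit_update.go - swap in a systemd unit only when it actually changed.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const unit = "/lib/systemd/system/docker.service"
	// diff exits 0 when the files match; then the rendered copy is redundant.
	if exec.Command("diff", "-u", unit, unit+".new").Run() == nil {
		os.Remove(unit + ".new")
		return
	}
	if err := os.Rename(unit+".new", unit); err != nil {
		log.Fatal(err)
	}
	for _, args := range [][]string{
		{"-f", "daemon-reload"},
		{"-f", "enable", "docker"},
		{"-f", "restart", "docker"},
	} {
		if err := exec.Command("systemctl", args...).Run(); err != nil {
			log.Fatalf("systemctl %v: %v", args, err)
		}
	}
}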
	
	I0522 18:32:26.182148  160939 machine.go:97] duration metric: took 1.707347407s to provisionDockerMachine
	I0522 18:32:26.182160  160939 client.go:171] duration metric: took 7.198404279s to LocalClient.Create
	I0522 18:32:26.182176  160939 start.go:167] duration metric: took 7.198463255s to libmachine.API.Create "multinode-737786"
	I0522 18:32:26.182182  160939 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:32:26.182195  160939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:32:26.182267  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:32:26.182301  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.198446  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.283412  160939 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:32:26.286206  160939 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:32:26.286222  160939 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:32:26.286230  160939 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:32:26.286238  160939 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:32:26.286245  160939 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:32:26.286252  160939 command_runner.go:130] > ID=ubuntu
	I0522 18:32:26.286258  160939 command_runner.go:130] > ID_LIKE=debian
	I0522 18:32:26.286280  160939 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:32:26.286291  160939 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:32:26.286302  160939 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:32:26.286317  160939 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:32:26.286328  160939 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:32:26.286376  160939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:32:26.286410  160939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:32:26.286428  160939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:32:26.286440  160939 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:32:26.286455  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:32:26.286505  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:32:26.286590  160939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:32:26.286602  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:32:26.286703  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:32:26.294122  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:26.314177  160939 start.go:296] duration metric: took 131.985031ms for postStartSetup
	I0522 18:32:26.314484  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.329734  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:26.329958  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:32:26.329996  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.344674  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.423242  160939 command_runner.go:130] > 27%
	I0522 18:32:26.423479  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:32:26.427170  160939 command_runner.go:130] > 215G
	I0522 18:32:26.427358  160939 start.go:128] duration metric: took 7.446253482s to createHost
	I0522 18:32:26.427380  160939 start.go:83] releasing machines lock for "multinode-737786", held for 7.446425308s
	I0522 18:32:26.427450  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.442825  160939 ssh_runner.go:195] Run: cat /version.json
	I0522 18:32:26.442867  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.442937  160939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:32:26.443009  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.459148  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.459626  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.615027  160939 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:32:26.615123  160939 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:32:26.615168  160939 ssh_runner.go:195] Run: systemctl --version
	I0522 18:32:26.618922  160939 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:32:26.618954  160939 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:32:26.619096  160939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:32:26.622539  160939 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:32:26.622555  160939 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:32:26.622561  160939 command_runner.go:130] > Device: 37h/55d	Inode: 803930      Links: 1
	I0522 18:32:26.622567  160939 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:26.622576  160939 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622584  160939 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622592  160939 command_runner.go:130] > Change: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622604  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622753  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:32:26.643532  160939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:32:26.643591  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:32:26.666889  160939 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0522 18:32:26.666926  160939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0522 18:32:26.666940  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.666967  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.667076  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.679769  160939 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:32:26.680589  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:32:26.688804  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:32:26.696790  160939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:32:26.696843  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:32:26.705063  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.713131  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:32:26.721185  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.729165  160939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:32:26.736590  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:32:26.744755  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:32:26.752531  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:32:26.760599  160939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:32:26.767562  160939 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:32:26.767615  160939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:32:26.774559  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:26.839033  160939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:32:26.926529  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.926582  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.926653  160939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:32:26.936733  160939 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:32:26.936821  160939 command_runner.go:130] > [Unit]
	I0522 18:32:26.936842  160939 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:32:26.936853  160939 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:32:26.936864  160939 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:32:26.936876  160939 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:32:26.936886  160939 command_runner.go:130] > Wants=network-online.target
	I0522 18:32:26.936894  160939 command_runner.go:130] > Requires=docker.socket
	I0522 18:32:26.936904  160939 command_runner.go:130] > StartLimitBurst=3
	I0522 18:32:26.936910  160939 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:32:26.936921  160939 command_runner.go:130] > [Service]
	I0522 18:32:26.936928  160939 command_runner.go:130] > Type=notify
	I0522 18:32:26.936937  160939 command_runner.go:130] > Restart=on-failure
	I0522 18:32:26.936949  160939 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:32:26.936965  160939 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:32:26.936979  160939 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:32:26.936992  160939 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:32:26.937014  160939 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:32:26.937027  160939 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:32:26.937042  160939 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:32:26.937058  160939 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:32:26.937072  160939 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:32:26.937081  160939 command_runner.go:130] > ExecStart=
	I0522 18:32:26.937105  160939 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:32:26.937116  160939 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:32:26.937132  160939 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:32:26.937143  160939 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:32:26.937151  160939 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:32:26.937158  160939 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:32:26.937167  160939 command_runner.go:130] > LimitCORE=infinity
	I0522 18:32:26.937177  160939 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:32:26.937188  160939 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:32:26.937197  160939 command_runner.go:130] > TasksMax=infinity
	I0522 18:32:26.937203  160939 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:32:26.937216  160939 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:32:26.937224  160939 command_runner.go:130] > Delegate=yes
	I0522 18:32:26.937234  160939 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:32:26.937243  160939 command_runner.go:130] > KillMode=process
	I0522 18:32:26.937253  160939 command_runner.go:130] > [Install]
	I0522 18:32:26.937263  160939 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:32:26.937834  160939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:32:26.937891  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:32:26.948358  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.963466  160939 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:32:26.963527  160939 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:32:26.966525  160939 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:32:26.966635  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:32:26.974160  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:32:26.991240  160939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:32:27.087184  160939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:32:27.183939  160939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:32:27.184074  160939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:32:27.199707  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.274364  160939 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:32:27.497339  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:32:27.508050  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.517912  160939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:32:27.594604  160939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:32:27.603789  160939 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0522 18:32:27.670370  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.738915  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:32:27.750303  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.759297  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.830818  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:32:27.886665  160939 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:32:27.886752  160939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:32:27.890680  160939 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:32:27.890703  160939 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:32:27.890711  160939 command_runner.go:130] > Device: 40h/64d	Inode: 258         Links: 1
	I0522 18:32:27.890720  160939 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:32:27.890729  160939 command_runner.go:130] > Access: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890736  160939 command_runner.go:130] > Modify: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890744  160939 command_runner.go:130] > Change: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890751  160939 command_runner.go:130] >  Birth: -
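"Will wait 60s for socket path" above suggests a stat-poll loop against /var/run/cri-dockerd.sock; here the first stat already succeeds. A minimal sketch of such a wait, with an assumed poll interval (the real interval is not visible in the log):

// wait_socket.go - poll for a unix socket until it exists or time runs out.
package main

import (
	"fmt"
	"log"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond) // assumed interval
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		log.Fatal(err)
	}
}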
	I0522 18:32:27.890789  160939 start.go:562] Will wait 60s for crictl version
	I0522 18:32:27.890843  160939 ssh_runner.go:195] Run: which crictl
	I0522 18:32:27.893791  160939 command_runner.go:130] > /usr/bin/crictl
	I0522 18:32:27.893846  160939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:32:27.922140  160939 command_runner.go:130] > Version:  0.1.0
	I0522 18:32:27.922160  160939 command_runner.go:130] > RuntimeName:  docker
	I0522 18:32:27.922164  160939 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:32:27.922170  160939 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:32:27.924081  160939 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:32:27.924147  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.943721  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.943794  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.963666  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.967758  160939 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:32:27.967841  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:27.982248  160939 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:32:27.985502  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:27.994876  160939 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:32:27.994996  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:27.995038  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.010537  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.010570  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.010579  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.010586  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.010591  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.010596  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.010603  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.010611  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.011521  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.011540  160939 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:32:28.011593  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.027292  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.027322  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.027331  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.027336  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.027341  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.027345  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.027350  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.027355  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.028262  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.028281  160939 cache_images.go:84] Images are preloaded, skipping loading
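The "Images are preloaded" decision amounts to checking that every image in the expected preload set appears in the `docker images` listing. A sketch of that check, with the expected list copied from the stdout block above (the helper is illustrative, not minikube's cache_images code):

// preload_check.go - decide whether image extraction can be skipped.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	want := []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/kube-scheduler:v1.30.1",
		"registry.k8s.io/kube-controller-manager:v1.30.1",
		"registry.k8s.io/kube-proxy:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/coredns/coredns:v1.11.1",
		"registry.k8s.io/pause:3.9",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	for _, img := range want {
		if !have[img] {
			log.Fatalf("missing %s: preload extraction still required", img)
		}
	}
	fmt.Println("images are preloaded, skipping loading")
}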
	I0522 18:32:28.028301  160939 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:32:28.028415  160939 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:32:28.028462  160939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:32:28.069428  160939 command_runner.go:130] > cgroupfs
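The query above asks the daemon itself for its cgroup driver through a Go template; the kubelet's cgroupDriver in the configuration rendered below has to agree with this value ("cgroupfs" here), or pods fail to start. The same query standalone:

// cgroup_driver.go - read docker's cgroup driver via `docker info`.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker cgroup driver:", strings.TrimSpace(string(out))) // cgroupfs
}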
	I0522 18:32:28.070479  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:28.070498  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:28.070517  160939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:32:28.070539  160939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:32:28.070668  160939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
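The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what later lands in /var/tmp/minikube/kubeadm.yaml.new. A sketch of a well-formedness check over such a multi-document stream, assuming gopkg.in/yaml.v3 as a dependency (not something minikube itself does at this point):

// kubeadm_yaml_check.go - verify a multi-document kubeadm config parses.
package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f)
	for n := 1; ; n++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatalf("document %d: %v", n, err)
		}
		fmt.Printf("document %d: kind=%v\n", n, doc["kind"])
	}
}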
	
	I0522 18:32:28.070717  160939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:32:28.078629  160939 command_runner.go:130] > kubeadm
	I0522 18:32:28.078645  160939 command_runner.go:130] > kubectl
	I0522 18:32:28.078649  160939 command_runner.go:130] > kubelet
	I0522 18:32:28.078672  160939 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:32:28.078732  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:32:28.086243  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:32:28.101448  160939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:32:28.116571  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0522 18:32:28.131251  160939 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:32:28.134083  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:28.142915  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:28.220165  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:28.231892  160939 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:32:28.231919  160939 certs.go:194] generating shared ca certs ...
	I0522 18:32:28.231939  160939 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.232062  160939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:32:28.232110  160939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:32:28.232120  160939 certs.go:256] generating profile certs ...
	I0522 18:32:28.232166  160939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:32:28.232179  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt with IP's: []
	I0522 18:32:28.429639  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt ...
	I0522 18:32:28.429667  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt: {Name:mkf8a2953d60a961d7574d013acfe3a49fa0bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429820  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key ...
	I0522 18:32:28.429830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key: {Name:mk8a5d9e68b7e6e877768e7a2b460a40a5615658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429900  160939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:32:28.429915  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0522 18:32:28.507177  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 ...
	I0522 18:32:28.507207  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43: {Name:mk09ce970fc623afc85e3fab7e404680e391a586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507367  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 ...
	I0522 18:32:28.507382  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43: {Name:mkb137dcb8e57c549f50c85273becdd727997895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507489  160939 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	I0522 18:32:28.507557  160939 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key
	I0522 18:32:28.507612  160939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:32:28.507627  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt with IP's: []
	I0522 18:32:28.617440  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt ...
	I0522 18:32:28.617473  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt: {Name:mk54959ff23e2bad94a115faba59db15d7610b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617661  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key ...
	I0522 18:32:28.617679  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key: {Name:mkd647f7d425cda8f2c79b7f52b5e4d12a0c0d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617777  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:32:28.617797  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:32:28.617808  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:32:28.617823  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:32:28.617836  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:32:28.617848  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:32:28.617860  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:32:28.617873  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:32:28.617924  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:32:28.617957  160939 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:32:28.617967  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:32:28.617990  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:32:28.618019  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:32:28.618040  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:32:28.618075  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:28.618102  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.618116  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.618128  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.618629  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:32:28.639518  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:32:28.659910  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:32:28.679937  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:32:28.699821  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:32:28.719536  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:32:28.739636  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:32:28.759509  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:32:28.779547  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:32:28.799365  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:32:28.819247  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:32:28.839396  160939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:32:28.854046  160939 ssh_runner.go:195] Run: openssl version
	I0522 18:32:28.858540  160939 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:32:28.858690  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:32:28.866551  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869507  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869532  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869569  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.875214  160939 command_runner.go:130] > b5213941
	I0522 18:32:28.875413  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:32:28.883074  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:32:28.890531  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893535  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893557  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893596  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.899083  160939 command_runner.go:130] > 51391683
	I0522 18:32:28.899310  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:32:28.906972  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:32:28.914876  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917837  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917865  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917909  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.923606  160939 command_runner.go:130] > 3ec20f2e
	I0522 18:32:28.923823  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
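Each of the three blocks above follows the same pattern: install the PEM, hash it, and create the hash-named symlink that OpenSSL's -CApath lookup expects. A minimal sketch with a hypothetical certificate name:

	H=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/example.pem)  # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/$H.0"               # subject-hash symlink
	openssl verify -CApath /etc/ssl/certs leaf.crt                             # CA now resolves via $H.0
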
	I0522 18:32:28.931516  160939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:32:28.934218  160939 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934259  160939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934296  160939 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:28.934404  160939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:32:28.950504  160939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:32:28.958332  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0522 18:32:28.958356  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0522 18:32:28.958365  160939 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0522 18:32:28.958430  160939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 18:32:28.966017  160939 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 18:32:28.966056  160939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 18:32:28.973169  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0522 18:32:28.973191  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0522 18:32:28.973203  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0522 18:32:28.973217  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973245  160939 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973254  160939 kubeadm.go:156] found existing configuration files:
	
	I0522 18:32:28.973282  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 18:32:28.979661  160939 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980332  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980367  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 18:32:28.987227  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 18:32:28.994428  160939 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994468  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994505  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 18:32:29.001374  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.008562  160939 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008604  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008648  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.015901  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 18:32:29.023088  160939 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023130  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023170  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
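The four grep/rm pairs above apply one rule: any pre-existing kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is deleted so kubeadm init regenerates it. Condensed into a loop, the check is equivalent to this sketch:

	for f in admin kubelet controller-manager scheduler; do
	  sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
	    || sudo rm -f "/etc/kubernetes/$f.conf"   # stale or missing: let kubeadm rewrite it
	done
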
	I0522 18:32:29.030242  160939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 18:32:29.069760  160939 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069799  160939 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069836  160939 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 18:32:29.069844  160939 command_runner.go:130] > [preflight] Running pre-flight checks
	I0522 18:32:29.113834  160939 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113865  160939 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113960  160939 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.113987  160939 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.114021  160939 kubeadm.go:309] OS: Linux
	I0522 18:32:29.114029  160939 command_runner.go:130] > OS: Linux
	I0522 18:32:29.114085  160939 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 18:32:29.114092  160939 command_runner.go:130] > CGROUPS_CPU: enabled
	I0522 18:32:29.114134  160939 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114140  160939 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114177  160939 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 18:32:29.114183  160939 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0522 18:32:29.114230  160939 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 18:32:29.114237  160939 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0522 18:32:29.114278  160939 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 18:32:29.114285  160939 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0522 18:32:29.114324  160939 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 18:32:29.114331  160939 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0522 18:32:29.114373  160939 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 18:32:29.114379  160939 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0522 18:32:29.114421  160939 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114428  160939 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114464  160939 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 18:32:29.114483  160939 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0522 18:32:29.173446  160939 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173485  160939 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173623  160939 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173639  160939 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173777  160939 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.173789  160939 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.376675  160939 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379640  160939 out.go:204]   - Generating certificates and keys ...
	I0522 18:32:29.376743  160939 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379742  160939 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0522 18:32:29.379760  160939 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 18:32:29.379853  160939 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.379864  160939 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.571675  160939 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.571705  160939 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.667370  160939 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.667408  160939 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.730638  160939 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:29.730650  160939 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:30.114166  160939 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.114190  160939 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.185007  160939 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185032  160939 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185157  160939 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.185169  160939 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376151  160939 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376188  160939 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376347  160939 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376364  160939 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.621621  160939 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.621651  160939 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.882886  160939 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.882922  160939 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.976851  160939 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 18:32:30.976877  160939 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0522 18:32:30.976927  160939 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:30.976932  160939 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:31.205083  160939 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.205126  160939 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.287749  160939 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.287812  160939 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.548360  160939 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.548390  160939 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.793952  160939 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.793983  160939 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.889475  160939 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.889508  160939 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.890099  160939 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.890122  160939 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.892764  160939 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895234  160939 out.go:204]   - Booting up control plane ...
	I0522 18:32:31.892832  160939 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895375  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895388  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895507  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895522  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895605  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.895619  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.903936  160939 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.903958  160939 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.904721  160939 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904737  160939 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904800  160939 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 18:32:31.904815  160939 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0522 18:32:31.989235  160939 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989268  160939 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989364  160939 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:31.989377  160939 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:32.490313  160939 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490352  160939 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490462  160939 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:32.490478  160939 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:36.991403  160939 kubeadm.go:309] [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:36.991445  160939 command_runner.go:130] > [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:37.002153  160939 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.002184  160939 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.012503  160939 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.012532  160939 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.028436  160939 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028465  160939 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028707  160939 kubeadm.go:309] [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.028725  160939 command_runner.go:130] > [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.035001  160939 kubeadm.go:309] [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.035012  160939 command_runner.go:130] > [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.036324  160939 out.go:204]   - Configuring RBAC rules ...
	I0522 18:32:37.036438  160939 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.036450  160939 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.039237  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.039252  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.044789  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.044808  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.047056  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.047074  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.049159  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.049174  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.051503  160939 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.051520  160939 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.397004  160939 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.397044  160939 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.813980  160939 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 18:32:37.814007  160939 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0522 18:32:38.397032  160939 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.397056  160939 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.398018  160939 kubeadm.go:309] 
	I0522 18:32:38.398101  160939 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398119  160939 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398137  160939 kubeadm.go:309] 
	I0522 18:32:38.398211  160939 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398218  160939 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398222  160939 kubeadm.go:309] 
	I0522 18:32:38.398246  160939 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 18:32:38.398255  160939 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0522 18:32:38.398337  160939 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398355  160939 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398434  160939 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398443  160939 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398453  160939 kubeadm.go:309] 
	I0522 18:32:38.398515  160939 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398522  160939 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398529  160939 kubeadm.go:309] 
	I0522 18:32:38.398609  160939 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398618  160939 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398622  160939 kubeadm.go:309] 
	I0522 18:32:38.398664  160939 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 18:32:38.398677  160939 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0522 18:32:38.398789  160939 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398800  160939 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398863  160939 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398869  160939 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398873  160939 kubeadm.go:309] 
	I0522 18:32:38.398944  160939 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.398950  160939 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.399022  160939 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 18:32:38.399032  160939 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0522 18:32:38.399037  160939 kubeadm.go:309] 
	I0522 18:32:38.399123  160939 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399130  160939 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399216  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399222  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399239  160939 kubeadm.go:309] 	--control-plane 
	I0522 18:32:38.399245  160939 command_runner.go:130] > 	--control-plane 
	I0522 18:32:38.399248  160939 kubeadm.go:309] 
	I0522 18:32:38.399370  160939 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399378  160939 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399382  160939 kubeadm.go:309] 
	I0522 18:32:38.399476  160939 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399489  160939 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399636  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.399649  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
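The --discovery-token-ca-cert-hash pins the cluster CA during join. If the printed value is lost, the kubeadm documentation's standard pipeline recomputes it from the CA certificate (the path below is kubeadm's default; minikube keeps its copy under /var/lib/minikube/certs/ca.crt):

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'   # yields the hex to use as sha256:<hex>
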
	I0522 18:32:38.401263  160939 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401277  160939 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401363  160939 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401380  160939 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401398  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:38.401406  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:38.403405  160939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 18:32:38.404599  160939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 18:32:38.408100  160939 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0522 18:32:38.408121  160939 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0522 18:32:38.408128  160939 command_runner.go:130] > Device: 37h/55d	Inode: 808770      Links: 1
	I0522 18:32:38.408133  160939 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:38.408141  160939 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408145  160939 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408150  160939 command_runner.go:130] > Change: 2024-05-22 17:45:13.285811920 +0000
	I0522 18:32:38.408155  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:13.257809894 +0000
	I0522 18:32:38.408204  160939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 18:32:38.408217  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 18:32:38.424237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 18:32:38.586825  160939 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.590952  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.596051  160939 command_runner.go:130] > serviceaccount/kindnet created
	I0522 18:32:38.602929  160939 command_runner.go:130] > daemonset.apps/kindnet created
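With the kindnet RBAC objects and DaemonSet applied, nodes move from NotReady to Ready once the CNI pods start. A quick convergence check (standard kubectl, shown for orientation):

	kubectl -n kube-system rollout status daemonset kindnet --timeout=120s
	kubectl get nodes -o wide
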
	I0522 18:32:38.606148  160939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 18:32:38.606224  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.606247  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-737786 minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=multinode-737786 minikube.k8s.io/primary=true
	I0522 18:32:38.613527  160939 command_runner.go:130] > -16
	I0522 18:32:38.613563  160939 ops.go:34] apiserver oom_adj: -16
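The oom_adj check above confirms the kubelet launched kube-apiserver with a strongly negative OOM adjustment (-16 in the legacy /proc interface), so the kernel's OOM killer avoids it under memory pressure. On current kernels the non-deprecated knob is oom_score_adj; mapping -16 to the kubelet's -998 for critical pods is an assumption about the kernel's scaling, not something this log states:

	cat /proc/$(pgrep kube-apiserver)/oom_score_adj
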
	I0522 18:32:38.671101  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0522 18:32:38.671199  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.679745  160939 command_runner.go:130] > node/multinode-737786 labeled
	I0522 18:32:38.773177  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.171792  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.232239  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.671894  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.732898  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.171368  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.228640  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.671860  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.732183  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.171401  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.231451  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.672085  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.732558  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.172181  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.230594  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.672237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.733746  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.171306  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.233896  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.671416  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.730755  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.171408  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.231441  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.672067  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.729906  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.171343  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.231696  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.671243  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.732606  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.172238  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.229695  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.671885  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.731711  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.171960  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.228503  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.671939  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.733171  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.171805  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.230525  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.672280  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.731666  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.171973  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.230294  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.671915  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.733184  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.171393  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.230515  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.672155  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.732157  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.171406  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.266742  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.671250  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.747943  160939 command_runner.go:130] > NAME      SECRETS   AGE
	I0522 18:32:51.747967  160939 command_runner.go:130] > default   0         0s
	I0522 18:32:51.747991  160939 kubeadm.go:1107] duration metric: took 13.141832952s to wait for elevateKubeSystemPrivileges
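The burst of "serviceaccounts \"default\" not found" errors above is expected: the default ServiceAccount only exists once kube-controller-manager's serviceaccount controller syncs, so minikube polls roughly twice a second until it appears (about 13s here). The same wait, as a plain shell sketch:

	until kubectl -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5   # controller-manager has not created the default SA yet
	done
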
	W0522 18:32:51.748021  160939 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 18:32:51.748034  160939 kubeadm.go:393] duration metric: took 22.813740637s to StartCluster
	I0522 18:32:51.748054  160939 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.748131  160939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.748830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.749052  160939 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:51.750591  160939 out.go:177] * Verifying Kubernetes components...
	I0522 18:32:51.749093  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 18:32:51.749107  160939 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:32:51.749382  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:51.752222  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:51.752296  160939 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:32:51.752312  160939 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:32:51.752326  160939 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	I0522 18:32:51.752339  160939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:32:51.752357  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.752681  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.752857  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.774832  160939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:51.775039  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.776160  160939 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:51.776175  160939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:32:51.776227  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.776423  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.776863  160939 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:32:51.776981  160939 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	I0522 18:32:51.777016  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.777336  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.795509  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.796953  160939 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:51.796975  160939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:32:51.797025  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.814477  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
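sshutil dials the node through the host port Docker mapped for 22/tcp (32897 here) using the profile's id_rsa key. A rough equivalent with golang.org/x/crypto/ssh (a sketch, not minikube's sshutil; most error handling trimmed):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port taken from the sshutil log line above.
	key, _ := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa")
	signer, _ := ssh.ParsePrivateKey(key)
	client, err := ssh.Dial("tcp", "127.0.0.1:32897", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable in a throwaway test VM only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, _ := client.NewSession()
	defer session.Close()
	out, _ := session.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s", out)
}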
	I0522 18:32:51.870824  160939 command_runner.go:130] > apiVersion: v1
	I0522 18:32:51.870847  160939 command_runner.go:130] > data:
	I0522 18:32:51.870853  160939 command_runner.go:130] >   Corefile: |
	I0522 18:32:51.870859  160939 command_runner.go:130] >     .:53 {
	I0522 18:32:51.870863  160939 command_runner.go:130] >         errors
	I0522 18:32:51.870869  160939 command_runner.go:130] >         health {
	I0522 18:32:51.870875  160939 command_runner.go:130] >            lameduck 5s
	I0522 18:32:51.870881  160939 command_runner.go:130] >         }
	I0522 18:32:51.870894  160939 command_runner.go:130] >         ready
	I0522 18:32:51.870908  160939 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0522 18:32:51.870919  160939 command_runner.go:130] >            pods insecure
	I0522 18:32:51.870929  160939 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0522 18:32:51.870939  160939 command_runner.go:130] >            ttl 30
	I0522 18:32:51.870946  160939 command_runner.go:130] >         }
	I0522 18:32:51.870957  160939 command_runner.go:130] >         prometheus :9153
	I0522 18:32:51.870967  160939 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0522 18:32:51.870977  160939 command_runner.go:130] >            max_concurrent 1000
	I0522 18:32:51.870983  160939 command_runner.go:130] >         }
	I0522 18:32:51.870993  160939 command_runner.go:130] >         cache 30
	I0522 18:32:51.871002  160939 command_runner.go:130] >         loop
	I0522 18:32:51.871009  160939 command_runner.go:130] >         reload
	I0522 18:32:51.871022  160939 command_runner.go:130] >         loadbalance
	I0522 18:32:51.871031  160939 command_runner.go:130] >     }
	I0522 18:32:51.871038  160939 command_runner.go:130] > kind: ConfigMap
	I0522 18:32:51.871047  160939 command_runner.go:130] > metadata:
	I0522 18:32:51.871058  160939 command_runner.go:130] >   creationTimestamp: "2024-05-22T18:32:37Z"
	I0522 18:32:51.871067  160939 command_runner.go:130] >   name: coredns
	I0522 18:32:51.871075  160939 command_runner.go:130] >   namespace: kube-system
	I0522 18:32:51.871086  160939 command_runner.go:130] >   resourceVersion: "229"
	I0522 18:32:51.871097  160939 command_runner.go:130] >   uid: d6517ddd-1175-4a40-a10d-60d1d382d7ae
	I0522 18:32:51.892382  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:51.892495  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
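The pipeline above fetches the coredns ConfigMap as YAML, uses sed to splice a hosts block (mapping host.minikube.internal to the host gateway 192.168.67.1) in front of the forward directive and a log directive ahead of errors, then replaces the ConfigMap. A hedged client-go equivalent of the hosts edit (anchor string and indentation are illustrative; the log directive is omitted):

package main

import (
	"context"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func injectHostRecord(ctx context.Context, cfg *rest.Config) error {
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	cm, err := clientset.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	hosts := "    hosts {\n       192.168.67.1 host.minikube.internal\n       fallthrough\n    }\n"
	// Splice the hosts block in ahead of the forward directive, as the sed expression does.
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"], "    forward .", hosts+"    forward .", 1)
	_, err = clientset.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
	return err
}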
	I0522 18:32:51.950050  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.950378  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.950733  160939 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.950852  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:51.950863  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.950877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.950889  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.959546  160939 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0522 18:32:51.959576  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.959584  160939 round_trippers.go:580]     Audit-Id: 5ddc21bd-b1b2-4ea2-81cf-c014c9a04f15
	I0522 18:32:51.959590  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.959595  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.959598  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.959602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.959606  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.959736  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:51.960668  160939 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:32:51.960761  160939 node_ready.go:38] duration metric: took 9.99326ms for node "multinode-737786" to be "Ready" ...
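node_ready decides readiness from the NodeReady condition in the node's status, which is what the GET above returns. A minimal client-go sketch of the same check (clientset built from the cfg in the earlier sketch):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func nodeIsReady(ctx context.Context, clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// The `"Ready":"True"` in the log is this condition on the node's status.
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}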
	I0522 18:32:51.960805  160939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:32:51.960931  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:32:51.960963  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.960982  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.960996  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.964902  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:51.964929  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.964939  160939 round_trippers.go:580]     Audit-Id: 8b3d34ee-cdb3-49cd-991b-94f61024f9e2
	I0522 18:32:51.964945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.964952  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.964972  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.964977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.964987  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.965722  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"354"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59005 chars]
	I0522 18:32:51.970917  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	I0522 18:32:51.971068  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:51.971109  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.971130  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.971146  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.043914  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:52.045304  160939 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0522 18:32:52.045329  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.045339  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.045343  160939 round_trippers.go:580]     Audit-Id: bed69948-0150-43f6-8c9c-dfd39f8a81e4
	I0522 18:32:52.045349  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.045354  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.045361  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.045365  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.046685  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.047307  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.047329  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.047339  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.047344  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.049383  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
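Both addon manifests are applied with the node's bundled kubectl against the in-cluster kubeconfig. The local equivalent of that exec call looks roughly like this (paths from the log; minikube actually runs the command over SSH inside the node):

package main

import (
	"context"
	"fmt"
	"os/exec"
)

func applyAddon(ctx context.Context, manifest string) error {
	// Mirrors: sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply -f <manifest>
	cmd := exec.CommandContext(ctx, "sudo", "env", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.30.1/kubectl", "apply", "-f", manifest)
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out) // e.g. "storageclass.storage.k8s.io/standard created"
	return err
}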
	I0522 18:32:52.051476  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.051500  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.051510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.051516  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.051520  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.051524  160939 round_trippers.go:580]     Audit-Id: 2d50dfec-8764-4cd8-92b8-99f40ba4532d
	I0522 18:32:52.051530  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.051543  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.051659  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.471981  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.472002  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.472013  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.472019  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.547388  160939 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0522 18:32:52.547416  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.547425  160939 round_trippers.go:580]     Audit-Id: 3eb91eea-1138-4663-bd0b-d4f080c3a1ee
	I0522 18:32:52.547430  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.547435  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.547439  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.547457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.547463  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.547916  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.548699  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.548751  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.548782  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.548796  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.554135  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.554200  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.554224  160939 round_trippers.go:580]     Audit-Id: c62627b8-a513-4303-8697-a7fe1f12763e
	I0522 18:32:52.554239  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.554272  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.554291  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.554304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.554318  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.554527  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.556697  160939 command_runner.go:130] > configmap/coredns replaced
	I0522 18:32:52.556753  160939 start.go:946] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0522 18:32:52.557175  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:52.557491  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:52.557873  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.557907  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.557920  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.557932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558046  160939 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0522 18:32:52.558165  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:32:52.558237  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.558260  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558272  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.560256  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:52.560319  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.560338  160939 round_trippers.go:580]     Audit-Id: 12b0e11e-6a44-4304-a157-2b7055e2205e
	I0522 18:32:52.560351  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.560363  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.560396  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.560416  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.560431  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.560444  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.560488  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561030  160939 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561137  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.561162  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.561192  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.561209  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.561222  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.561529  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:52.561547  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.561556  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.561562  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.561567  160939 round_trippers.go:580]     Content-Length: 1273
	I0522 18:32:52.561573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.561577  160939 round_trippers.go:580]     Audit-Id: e2fb2ed9-f480-430a-b9b8-1cb5e5498c36
	I0522 18:32:52.561587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.561592  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.561795  160939 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0522 18:32:52.562115  160939 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.562161  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:32:52.562173  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.562180  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.562188  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.562193  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.566308  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.566355  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.566400  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566361  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566429  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566439  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566449  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566463  160939 round_trippers.go:580]     Content-Length: 1220
	I0522 18:32:52.566468  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566473  160939 round_trippers.go:580]     Audit-Id: 6b60d46d-17ef-45bb-880c-06c439fe9bab
	I0522 18:32:52.566411  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566491  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566498  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566501  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.566505  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566505  160939 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.566509  160939 round_trippers.go:580]     Audit-Id: 2b01bd0d-fb2f-4a1e-8831-7dc2e68860f5
	I0522 18:32:52.566521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566538  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"360","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
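The GET/PUT pair on /apis/storage.k8s.io/v1/storageclasses above verifies that the freshly created standard class carries the is-default-class annotation and writes it back unchanged. Listing and checking that annotation with client-go looks roughly like this (a sketch; the helper name is illustrative):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func defaultStorageClass(ctx context.Context, clientset *kubernetes.Clientset) (string, error) {
	scs, err := clientset.StorageV1().StorageClasses().List(ctx, metav1.ListOptions{})
	if err != nil {
		return "", err
	}
	for _, sc := range scs.Items {
		// The same annotation the "standard" class carries in the log.
		if sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true" {
			return sc.Name, nil // "standard" here
		}
	}
	return "", nil
}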
	I0522 18:32:52.972030  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.972055  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.972069  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.972073  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.973864  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.973887  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.973900  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.973905  160939 round_trippers.go:580]     Audit-Id: 487db757-1a6c-442b-b5d4-799652d478f6
	I0522 18:32:52.973912  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.973918  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.973922  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.973927  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.974296  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:52.974890  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.974910  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.974922  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.974927  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.976545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.976564  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.976574  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.976579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.976584  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.976589  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.976594  160939 round_trippers.go:580]     Audit-Id: 785dc732-84fe-4320-964c-c2a36a76c8f6
	I0522 18:32:52.976600  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.976934  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.058578  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:53.058609  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.058620  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.058627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.061245  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.061289  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.061299  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:53.061340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.061372  160939 round_trippers.go:580]     Audit-Id: 77d818dd-5f3a-495e-b1ef-ad1a288275fa
	I0522 18:32:53.061388  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.061402  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.061415  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.061432  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.061472  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"370","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:53.061571  160939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-737786" context rescaled to 1 replicas
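The rescale above goes through the deployment's scale subresource: GET the Scale object, drop spec.replicas from 2 to 1, and PUT it back. The client-go form of the same round trip (a sketch; minikube's kapi helper wraps this):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, clientset *kubernetes.Clientset) error {
	deployments := clientset.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}
	scale.Spec.Replicas = 1 // single-replica default; the extra coredns pod is torn down
	_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	return err
}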
	I0522 18:32:53.076516  160939 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0522 18:32:53.076577  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0522 18:32:53.076599  160939 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076613  160939 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076633  160939 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0522 18:32:53.076657  160939 command_runner.go:130] > pod/storage-provisioner created
	I0522 18:32:53.076679  160939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02727208s)
	I0522 18:32:53.079116  160939 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:32:53.080504  160939 addons.go:505] duration metric: took 1.3313922s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0522 18:32:53.471419  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.471453  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.471462  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.471488  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.473769  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.473791  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.473800  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.473806  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.473811  160939 round_trippers.go:580]     Audit-Id: 19f0699f-65e4-4321-a5c4-f6dcf712595d
	I0522 18:32:53.473821  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.473827  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.473830  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.474009  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.474506  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.474523  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.474532  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.474538  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.476545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.476568  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.476579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.476584  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.476591  160939 round_trippers.go:580]     Audit-Id: 723b363a-893a-4a61-92a4-6c8128f0cdae
	I0522 18:32:53.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.476602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.476735  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.971555  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.971574  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.971587  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.971591  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.973627  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.973649  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.973659  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.973664  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.973670  160939 round_trippers.go:580]     Audit-Id: e1a5610a-326e-418b-be80-a1b218bad573
	I0522 18:32:53.973679  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.973686  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.973691  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.973900  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.974364  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.974377  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.974386  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.974395  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.976104  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.976125  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.976134  160939 round_trippers.go:580]     Audit-Id: 1d117d40-7bef-4873-8469-b7cbb9e6e3e0
	I0522 18:32:53.976139  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.976143  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.976148  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.976158  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.976278  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.976641  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
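pod_ready keeps re-GETting the pod until its PodReady condition turns True; this particular replica never will, since the rescale already stamped it with a deletionTimestamp. A sketch of an equivalent poll using apimachinery's wait helpers (PollUntilContextTimeout needs a reasonably recent apimachinery; minikube's own poller differs in detail):

package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func waitPodReady(ctx context.Context, clientset *kubernetes.Clientset, ns, name string) error {
	// Re-check every 500ms, give up after the same 6m budget the log uses.
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := clientset.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}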
	I0522 18:32:54.471526  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.471550  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.471561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.471566  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.473892  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.473909  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.473916  160939 round_trippers.go:580]     Audit-Id: 38fa8439-426c-4d8e-8939-768fdd726b5d
	I0522 18:32:54.473920  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.473923  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.473929  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.473935  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.473939  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.474175  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.474657  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.474672  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.474679  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.474682  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.476422  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.476440  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.476449  160939 round_trippers.go:580]     Audit-Id: a464492a-887c-4ec3-9a36-841c6416e733
	I0522 18:32:54.476454  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.476458  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.476461  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.476465  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.476470  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.476646  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:54.971300  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.971328  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.971338  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.971345  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.973536  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.973554  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.973560  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.973564  160939 round_trippers.go:580]     Audit-Id: 233e0e2b-7f8e-4aa8-8c2e-b30dfaf9e4ee
	I0522 18:32:54.973569  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.973575  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.973580  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.973588  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.973824  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.974258  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.974270  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.974277  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.974281  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.976126  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.976141  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.976157  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.976161  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.976166  160939 round_trippers.go:580]     Audit-Id: 72f4a310-bf67-444b-9e24-1577b45c6c56
	I0522 18:32:54.976171  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.976176  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.976347  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.471862  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.471892  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.471903  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.471908  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.474083  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:55.474099  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.474105  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.474108  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.474111  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.474114  160939 round_trippers.go:580]     Audit-Id: 8719e64b-1bf6-4245-a412-eed38a58d1ce
	I0522 18:32:55.474117  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.474121  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.474290  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.474797  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.474823  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.474832  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.474840  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.476324  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.476342  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.476349  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.476355  160939 round_trippers.go:580]     Audit-Id: db213f13-4ec8-4ca3-8987-3f1626a1ad2d
	I0522 18:32:55.476361  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.476365  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.476368  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.476372  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.476512  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.972155  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.972178  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.972186  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.972189  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.973945  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.973967  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.973975  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.973981  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.973987  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.973990  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.973994  160939 round_trippers.go:580]     Audit-Id: a2f51de9-bbaf-49c3-b52e-cd37fc92f529
	I0522 18:32:55.973999  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.974153  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.974595  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.974611  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.974621  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.974627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.976270  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.976293  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.976301  160939 round_trippers.go:580]     Audit-Id: 93227216-8ffe-41b3-8a0d-0b4e86a54912
	I0522 18:32:55.976306  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.976310  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.976315  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.976319  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.976325  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.976427  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.976688  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
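	[editor's note] The repeating request/response blocks above and below are one wait loop: minikube's pod_ready checker (pod_ready.go in the log tags) re-fetches the coredns pod and its node roughly every 500ms, logging each round trip, until the pod reports Ready or the wait times out. A minimal sketch of that style of poll using client-go follows; it is an illustration under assumptions, not minikube's actual helper, and the package and function names (podwait, pollPodReady) are hypothetical.

	// Package podwait is a hypothetical home for this sketch.
	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// pollPodReady re-fetches the named pod until its Ready condition is True,
	// mirroring the GET-every-500ms pattern visible in the log above.
	func pollPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			// Scan the status conditions for Ready=True.
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("pod %s/%s not Ready after %v", ns, name, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}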
	I0522 18:32:56.472139  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.472158  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.472167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.472170  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.474238  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.474260  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.474268  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.474274  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.474279  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.474283  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.474287  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.474292  160939 round_trippers.go:580]     Audit-Id: f67f7ae7-b10d-49f2-94a9-005c4a460c94
	I0522 18:32:56.474484  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.474925  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.474940  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.474946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.474951  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.476537  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.476552  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.476558  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.476563  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.476567  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.476570  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.476573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.476576  160939 round_trippers.go:580]     Audit-Id: 518e1062-0e5b-47ad-b60f-0ff66e25a622
	I0522 18:32:56.476712  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:56.971350  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.971373  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.971381  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.971384  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.973476  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.973497  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.973506  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.973511  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.973517  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.973523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.973527  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.973531  160939 round_trippers.go:580]     Audit-Id: eedbefe3-18e8-407d-9ede-0033266cdf11
	I0522 18:32:56.973633  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.974094  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.974111  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.974118  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.974123  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.975718  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.975738  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.975747  160939 round_trippers.go:580]     Audit-Id: 74afa443-a147-43c7-8759-9886afead09a
	I0522 18:32:56.975753  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.975758  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.975764  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.975768  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.975771  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.975928  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.471499  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.471522  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.471528  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.471532  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.473644  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.473662  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.473668  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.473671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.473674  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.473677  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.473680  160939 round_trippers.go:580]     Audit-Id: 2eec1341-a4a0-4edc-9eab-dd0cee12d4eb
	I0522 18:32:57.473682  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.473870  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.474329  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.474343  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.474350  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.474353  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.475871  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.475886  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.475896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.475901  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.475906  160939 round_trippers.go:580]     Audit-Id: 7e8e4b95-aa91-463a-8f1e-a7944e5daa49
	I0522 18:32:57.475911  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.475916  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.475920  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.476058  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.971752  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.971774  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.971786  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.971790  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.974020  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.974037  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.974043  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.974047  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.974051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.974054  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.974057  160939 round_trippers.go:580]     Audit-Id: 9042de65-ddca-4653-8deb-6e07b20ad9d2
	I0522 18:32:57.974061  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.974263  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.974686  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.974698  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.974705  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.974709  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.976426  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.976445  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.976453  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.976459  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.976464  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.976467  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.976472  160939 round_trippers.go:580]     Audit-Id: 9526988d-2210-4a9c-a210-f69ada2f111e
	I0522 18:32:57.976478  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.976615  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.976919  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:32:58.471854  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.471880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.471893  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.471899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.474173  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.474197  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.474206  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.474211  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.474216  160939 round_trippers.go:580]     Audit-Id: 0827c408-752f-4496-b2bf-06881300dabc
	I0522 18:32:58.474220  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.474224  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.474229  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.474408  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.474983  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.474998  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.475008  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.475014  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.476910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.476934  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.476952  160939 round_trippers.go:580]     Audit-Id: 338928cb-0e5e-4004-be77-29760ea7f6ae
	I0522 18:32:58.476958  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.476962  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.476966  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.476971  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.476986  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.477133  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:58.972097  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.972125  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.972137  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.972141  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.974651  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.974676  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.974683  160939 round_trippers.go:580]     Audit-Id: 3b3e33fc-c0a8-4a82-9e28-68c6c5eaf90e
	I0522 18:32:58.974688  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.974692  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.974695  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.974698  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.974707  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.974973  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.975580  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.975600  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.975610  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.975615  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.977624  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.977644  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.977654  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.977661  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.977666  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.977671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.977676  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.977680  160939 round_trippers.go:580]     Audit-Id: aa509792-9021-4f49-a36b-6862ae864dbf
	I0522 18:32:58.977836  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.471442  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.471471  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.471481  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.471486  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.473954  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.473974  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.473983  160939 round_trippers.go:580]     Audit-Id: 04e773e3-ead6-4608-b93f-200b1f7771a2
	I0522 18:32:59.473989  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.473992  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.473997  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.474001  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.474005  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.474205  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.474819  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.474880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.474905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.474923  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.476903  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.476923  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.476932  160939 round_trippers.go:580]     Audit-Id: 57919320-6611-4945-a59e-eab9e9d1f7e3
	I0522 18:32:59.476937  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.476943  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.476949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.476953  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.476958  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.477092  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.971835  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.971912  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.971932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.971946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.974565  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.974586  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.974602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.974606  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.974610  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.974614  160939 round_trippers.go:580]     Audit-Id: 4509f4e5-e206-4cb4-9616-c5dedd8269bf
	I0522 18:32:59.974619  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.974624  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.974794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.975386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.975404  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.975413  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.975419  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.977401  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.977425  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.977434  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.977440  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.977445  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.977449  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.977453  160939 round_trippers.go:580]     Audit-Id: ba22dbea-6d68-4ec4-bcad-c24172ba5062
	I0522 18:32:59.977458  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.977594  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.977937  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
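	[editor's note] Why this wait cannot succeed: every Pod response above carries "deletionTimestamp":"2024-05-22T18:33:22Z" with a 30-second grace period, i.e. coredns-7db6d8ff4d-fhhmr is already being torn down, so its Ready condition will never turn True however long the loop polls (its resourceVersion stays at 396 throughout, while the node's advances from 322 to 411). A hedged sketch of a guard, extending the hypothetical pollPodReady above and not taken from minikube's code, that treats a terminating pod as a terminal state rather than retrying:

	// terminating reports whether the pod has been marked for deletion; such a
	// pod can no longer become Ready, so a waiter should stop polling it and
	// re-resolve the ReplicaSet's replacement pod instead.
	func terminating(pod *corev1.Pod) bool {
		return pod.ObjectMeta.DeletionTimestamp != nil
	}

	Called at the top of each iteration in pollPodReady, this would fail fast here instead of burning the whole timeout on a pod that is going away.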
	I0522 18:33:00.471222  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.471241  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.471249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.471252  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.473593  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.473618  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.473629  160939 round_trippers.go:580]     Audit-Id: c4fb389b-3f7d-490e-a802-3bf985dfd423
	I0522 18:33:00.473636  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.473641  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.473645  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.473651  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.473656  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.473892  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.474545  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.474565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.474576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.474581  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.476561  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.476581  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.476590  160939 round_trippers.go:580]     Audit-Id: 67254c57-0400-4b43-af9d-f4913af7b105
	I0522 18:33:00.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.476603  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.476608  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.476611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.476748  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:00.971233  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.971261  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.971299  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.971306  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.973731  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.973750  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.973758  160939 round_trippers.go:580]     Audit-Id: 2f76e9b4-7689-4d89-b284-e9126bd9bad5
	I0522 18:33:00.973762  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.973765  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.973771  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.973774  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.973784  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.974017  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.974608  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.974625  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.974634  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.974639  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.976439  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.976457  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.976465  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.976470  160939 round_trippers.go:580]     Audit-Id: f4fe94f7-5d5c-4b51-a0c7-f46b19a6f0d4
	I0522 18:33:00.976477  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.976485  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.976494  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.976502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.976610  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.471893  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.471931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.471942  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.471949  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.474657  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.474680  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.474688  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.474696  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.474702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.474725  160939 round_trippers.go:580]     Audit-Id: f26f6817-f4b1-4acb-bdf5-088215c31307
	I0522 18:33:01.474736  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.474740  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.474974  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.475618  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.475639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.475649  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.475655  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.477465  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.477487  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.477497  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.477505  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.477510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.477514  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.477517  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.477524  160939 round_trippers.go:580]     Audit-Id: 1977529f-1acd-423c-9682-42cf6dd4398d
	I0522 18:33:01.477708  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.971204  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.971371  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.971388  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.971393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974041  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.974091  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.974104  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.974111  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.974116  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.974121  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.974127  160939 round_trippers.go:580]     Audit-Id: 292c70c4-b00e-4836-b96a-6c8a747f9bd9
	I0522 18:33:01.974131  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.974293  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.974866  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.974888  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.974899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.976825  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.976848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.976856  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.976862  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.976868  160939 round_trippers.go:580]     Audit-Id: 388c0271-dee4-4384-b77b-c690f1d36c5a
	I0522 18:33:01.976873  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.976880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.976883  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.977037  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.471454  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.471549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.471565  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.471574  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.474157  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.474178  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.474186  160939 round_trippers.go:580]     Audit-Id: 82bb2437-1ea8-4e8d-9e5f-70376d7ee9ee
	I0522 18:33:02.474192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.474196  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.474200  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.474205  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.474208  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.474392  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.475060  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.475077  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.475087  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.475092  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.477070  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.477099  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.477109  160939 round_trippers.go:580]     Audit-Id: 67eab720-8fd6-4965-a754-5010c88a7253
	I0522 18:33:02.477116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.477120  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.477124  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.477127  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.477131  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.477280  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.477649  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:33:02.971540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.971565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.971576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.971582  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.974293  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.974315  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.974325  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.974330  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.974335  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.974340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.974345  160939 round_trippers.go:580]     Audit-Id: ad75c6ab-9962-47cf-be26-f410ec61bd12
	I0522 18:33:02.974350  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.974587  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.975218  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.975239  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.975249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.975258  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.977182  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.977245  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.977260  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.977266  160939 round_trippers.go:580]     Audit-Id: c0467f5a-9a3a-40e8-b473-9c175fd6891e
	I0522 18:33:02.977271  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.977277  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.977284  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.977288  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.977392  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.472108  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.472133  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.472143  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.472149  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.474741  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.474768  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.474778  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.474782  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.474787  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.474792  160939 round_trippers.go:580]     Audit-Id: 1b9bea48-179f-40ca-a879-0e436eb40d14
	I0522 18:33:03.474797  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.474801  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.474970  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.475572  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.475591  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.475601  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.475607  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.477470  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.477489  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.477497  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.477502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.477506  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.477511  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.477515  160939 round_trippers.go:580]     Audit-Id: b00b1393-d773-4e79-83a7-fbadc0d83dce
	I0522 18:33:03.477521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.477650  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.971411  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.971440  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.971450  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.971455  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.974132  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.974155  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.974164  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.974171  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.974176  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.974180  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.974185  160939 round_trippers.go:580]     Audit-Id: 2b46951a-0d87-464c-b928-e0491b518b0e
	I0522 18:33:03.974192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.974344  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.974929  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.974949  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.974959  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.974965  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.976727  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.976759  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.976769  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.976775  160939 round_trippers.go:580]     Audit-Id: efda080a-3af4-4b70-aa46-baefc2b1a086
	I0522 18:33:03.976779  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.976784  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.976788  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.976792  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.977006  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.471440  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.471466  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.471475  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.471478  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.473781  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.473798  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.473806  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.473812  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.473823  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.473828  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.473832  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.473837  160939 round_trippers.go:580]     Audit-Id: 584fe422-d82d-4c7e-81d2-665d8be8873b
	I0522 18:33:04.474014  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.474484  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.474542  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.474564  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.474581  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.476818  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.476848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.476856  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.476862  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.476866  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.476872  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.476877  160939 round_trippers.go:580]     Audit-Id: 577875ba-d973-41fb-8b48-0973202f1354
	I0522 18:33:04.476885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.477034  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.971729  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.971751  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.971759  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.971763  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.974273  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.974295  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.974304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.974311  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.974318  160939 round_trippers.go:580]     Audit-Id: e77cbda3-9098-456e-962d-06d9e7e98aee
	I0522 18:33:04.974323  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.974336  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.974341  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.974475  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.975121  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.975157  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.975167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.975172  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.977047  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:04.977076  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.977086  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.977094  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.977102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.977110  160939 round_trippers.go:580]     Audit-Id: 15591115-c0cb-473f-90d4-6c56cf6353d7
	I0522 18:33:04.977116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.977124  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.977257  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.977558  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
	I0522 18:33:05.471962  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.471987  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.471997  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.472003  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.474481  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.474506  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.474516  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.474523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.474527  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.474532  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.474536  160939 round_trippers.go:580]     Audit-Id: fdb343ad-37ed-4d5e-8481-409ca7bff1bb
	I0522 18:33:05.474542  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.474675  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.475316  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.475335  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.475345  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.475349  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.477162  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:05.477192  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.477208  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.477219  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.477224  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.477230  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.477237  160939 round_trippers.go:580]     Audit-Id: 5a4a1adb-a9e7-45d6-89b9-6f8cbdc8e14f
	I0522 18:33:05.477241  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.477365  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:05.971575  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.971603  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.971614  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.971620  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.973961  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.973988  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.973998  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.974005  160939 round_trippers.go:580]     Audit-Id: 6cf57dbb-f61f-4a34-ba71-0fa1a7be6c2f
	I0522 18:33:05.974009  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.974015  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.974020  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.974024  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.974227  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.974844  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.974866  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.974877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.974885  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.976914  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.976937  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.976948  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.976955  160939 round_trippers.go:580]     Audit-Id: f5c6902b-e141-4739-b75c-abe5d7d10bcc
	I0522 18:33:05.976962  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.976969  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.976977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.976982  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.977139  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.471359  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:06.471382  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.471390  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.471393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.473976  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.473998  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.474008  160939 round_trippers.go:580]     Audit-Id: 678a5898-c668-42b8-9f9d-cd08c0af9f0a
	I0522 18:33:06.474014  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.474021  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.474026  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.474032  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.474036  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.474212  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"419","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6465 chars]
	I0522 18:33:06.474787  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.474806  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.474816  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.474824  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.476696  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.476720  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.476727  160939 round_trippers.go:580]     Audit-Id: 08522360-196f-4610-a526-8fbc3b876994
	I0522 18:33:06.476732  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.476736  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.476739  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.476742  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.476754  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.476918  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.477418  160939 pod_ready.go:97] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2
}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0522 18:33:06.477449  160939 pod_ready.go:81] duration metric: took 14.506466075s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	E0522 18:33:06.477464  160939 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
7.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
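	The verdict above is the crux of this failure: the readiness wait for coredns-7db6d8ff4d-fhhmr ends after only 14.5s, well short of its timeout, because the pod reached phase "Succeeded" (its container exited 0 at 18:33:06), and a terminated pod can never become Ready. What follows is a minimal sketch of a wait loop with that shape, assuming client-go; waitPodReady and its parameters are illustrative names, not minikube's actual pod_ready.go internals.

	// Sketch only: poll a pod every 500ms (the cadence visible in the GETs above),
	// fail fast on phase "Succeeded", otherwise wait for the Ready condition.
	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API error: keep polling
				}
				if pod.Status.Phase == corev1.PodSucceeded {
					// A Succeeded pod has terminated; returning an error aborts the
					// poll immediately, mirroring the "(skipping!)" bail-out above.
					return false, fmt.Errorf("pod %q has status phase %q (skipping!)", name, pod.Status.Phase)
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	Returning a non-nil error from the poll condition is what makes the wait give up early here; the loop then moves on to the next pod (coredns-7db6d8ff4d-jhsz9, below) rather than spending the remaining timeout on a pod that has already completed.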
	I0522 18:33:06.477476  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:06.477540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.477549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.477558  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.477569  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.479562  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.479577  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.479583  160939 round_trippers.go:580]     Audit-Id: 9a30cf33-1204-4670-a99f-86946c97d423
	I0522 18:33:06.479587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.479591  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.479597  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.479605  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.479611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.479794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.480253  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.480269  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.480275  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.480279  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.481839  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.481857  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.481867  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.481872  160939 round_trippers.go:580]     Audit-Id: fa40a49d-204f-481d-8912-a34512c1ae3b
	I0522 18:33:06.481876  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.481880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.481884  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.481888  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.481980  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.978658  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.978680  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.978691  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.978699  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.980836  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.980853  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.980860  160939 round_trippers.go:580]     Audit-Id: afbb292e-0ad0-4084-869c-e9ab1e1013e2
	I0522 18:33:06.980864  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.980867  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.980869  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.980871  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.980874  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.981047  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.981449  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.981462  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.981468  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.981471  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.982978  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.983001  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.983007  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.983010  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.983014  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.983018  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.983021  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.983024  160939 round_trippers.go:580]     Audit-Id: 5f3372bc-5c9a-49ce-8e2e-d96da0513d85
	I0522 18:33:06.983146  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.478352  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:07.478377  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.478384  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.478388  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.480498  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.480523  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.480531  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.480535  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.480540  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.480543  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.480546  160939 round_trippers.go:580]     Audit-Id: eb5f2654-4971-4578-bff8-10e4102baa23
	I0522 18:33:07.480550  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.480747  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:33:07.481177  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.481191  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.481197  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.481201  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.482856  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.482869  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.482876  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.482880  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.482882  160939 round_trippers.go:580]     Audit-Id: 8e36f69f-54f0-4e9d-a61f-f28960dbb847
	I0522 18:33:07.482885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.482891  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.482896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.483013  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.483304  160939 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.483324  160939 pod_ready.go:81] duration metric: took 1.005836965s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
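Each GET stanza above is one iteration of a plain poll: fetch the pod, look for a Ready condition with status True, sleep roughly half a second, retry until the 6m0s budget expires. A hedged reconstruction, assuming a *kubernetes.Clientset built from the profile's kubeconfig (the wiring is not shown in the log):

    // Sketch of the readiness poll loop; names and cadence are illustrative.
    package podwait

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    					return nil // the `"Ready":"True"` line above
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond) // matches the ~500ms gaps between GETs
    	}
    	return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }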
	I0522 18:33:07.483334  160939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:33:07.483393  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.483399  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.483403  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.485055  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.485074  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.485080  160939 round_trippers.go:580]     Audit-Id: 36a9d3b1-5c0c-41cd-92e6-65aaf83162ed
	I0522 18:33:07.485084  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.485089  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.485093  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.485098  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.485102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.485211  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:33:07.485525  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.485537  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.485544  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.485547  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.486957  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.486977  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.486984  160939 round_trippers.go:580]     Audit-Id: 4d183f34-de9b-40df-89b0-747f4b8d080a
	I0522 18:33:07.486991  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.486997  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.487008  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.487015  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.487019  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.487106  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.487417  160939 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.487433  160939 pod_ready.go:81] duration metric: took 4.091969ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487445  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487498  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:33:07.487505  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.487511  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.487514  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.489030  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.489044  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.489060  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.489064  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.489068  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.489072  160939 round_trippers.go:580]     Audit-Id: 816d35e6-d77c-435e-912a-947f9c9ca4d7
	I0522 18:33:07.489075  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.489078  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.489182  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:33:07.489546  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.489558  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.489564  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.489568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.490910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.490924  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.490930  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.490934  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.490937  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.490942  160939 round_trippers.go:580]     Audit-Id: 15a2ac49-01ac-4660-8380-560b4572c707
	I0522 18:33:07.490945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.490949  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.491063  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.491412  160939 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.491430  160939 pod_ready.go:81] duration metric: took 3.978447ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491441  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491501  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:33:07.491510  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.491520  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.491525  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.492901  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.492917  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.492936  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.492944  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.492949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.492953  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.492958  160939 round_trippers.go:580]     Audit-Id: 599fa209-a829-4a91-9f16-72ec6e1a6954
	I0522 18:33:07.492961  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.493092  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:33:07.493557  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.493574  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.493584  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.493594  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.495001  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.495023  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.495032  160939 round_trippers.go:580]     Audit-Id: 451564e8-a844-4514-b8e9-ba808ecbe9d8
	I0522 18:33:07.495042  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.495047  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.495051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.495057  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.495061  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.495200  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.495470  160939 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.495494  160939 pod_ready.go:81] duration metric: took 4.045749ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495507  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495547  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:33:07.495553  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.495561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.495568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.497087  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.497100  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.497105  160939 round_trippers.go:580]     Audit-Id: 1fe00356-708f-49ce-b6e8-360006eb0d30
	I0522 18:33:07.497109  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.497114  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.497119  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.497123  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.497129  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.497236  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:33:07.671971  160939 request.go:629] Waited for 174.334017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672035  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672040  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.672048  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.672051  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.673738  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.673754  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.673762  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.673769  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.673773  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.673777  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.673781  160939 round_trippers.go:580]     Audit-Id: 72f84e56-248f-49c0-b60e-16c5fc7a3e8c
	I0522 18:33:07.673785  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.673915  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.674199  160939 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.674216  160939 pod_ready.go:81] duration metric: took 178.701037ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
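The "Waited for ... due to client-side throttling, not priority and fairness" lines are client-go's own token-bucket rate limiter holding requests back, not the apiserver pushing back. Where a harness needs denser polling, the limits can be raised on the rest.Config; a sketch with illustrative values (client-go's defaults have historically been 5 QPS with a burst of 10):

    // Sketch: loosening client-go's client-side rate limiter.
    package kube

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func newLessThrottledClient(kubeconfig string) (*kubernetes.Clientset, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return nil, err
    	}
    	cfg.QPS = 50    // token-bucket refill rate for this client
    	cfg.Burst = 100 // allows short spikes like the per-pod checks above
    	return kubernetes.NewForConfig(cfg)
    }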
	I0522 18:33:07.674225  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.871582  160939 request.go:629] Waited for 197.277518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871632  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.871646  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.871651  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.873675  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.873695  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.873702  160939 round_trippers.go:580]     Audit-Id: d0aea0c3-6995-4f17-9b3f-5c0b00c0a82e
	I0522 18:33:07.873707  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.873710  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.873714  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.873718  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.873721  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.873885  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:33:08.071516  160939 request.go:629] Waited for 197.279562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071592  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071600  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.071608  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.071612  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.073750  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.074093  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.074136  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.074152  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.074164  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.074178  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.074192  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.074205  160939 round_trippers.go:580]     Audit-Id: 9b07fddc-fd9a-4741-b67f-7bda2d392bdb
	I0522 18:33:08.074358  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:08.074852  160939 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:08.074892  160939 pod_ready.go:81] duration metric: took 400.659133ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:08.074912  160939 pod_ready.go:38] duration metric: took 16.114074117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:33:08.074944  160939 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:33:08.075020  160939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:33:08.085416  160939 command_runner.go:130] > 2247
	I0522 18:33:08.086205  160939 api_server.go:72] duration metric: took 16.337127031s to wait for apiserver process to appear ...
	I0522 18:33:08.086224  160939 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:33:08.086244  160939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:33:08.090306  160939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
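The health gate is a plain HTTPS GET against /healthz that must come back 200 with the body "ok". A self-contained sketch; kubeadm-style clusters typically expose /healthz to unauthenticated clients via the system:public-info-viewer role, and InsecureSkipVerify here merely stands in for loading the cluster CA:

    // Sketch of the /healthz probe logged above. Illustration only: a real
    // client should trust the cluster CA instead of skipping verification.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    )

    func main() {
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	resp, err := client.Get("https://192.168.67.2:8443/healthz")
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }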
	I0522 18:33:08.090371  160939 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:33:08.090381  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.090392  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.090411  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.091107  160939 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:33:08.091121  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.091127  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.091130  160939 round_trippers.go:580]     Audit-Id: d9f416c6-963b-4b2c-9260-40a10a9a60da
	I0522 18:33:08.091133  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.091136  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.091138  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.091141  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.091144  160939 round_trippers.go:580]     Content-Length: 263
	I0522 18:33:08.091156  160939 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:33:08.091223  160939 api_server.go:141] control plane version: v1.30.1
	I0522 18:33:08.091237  160939 api_server.go:131] duration metric: took 5.007834ms to wait for apiserver health ...
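The version probe above is the raw form of what client-go's discovery client wraps; a sketch with an illustrative kubeconfig path:

    // Sketch: fetching the control-plane version via the discovery client.
    package main

    import (
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	v, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(v.GitVersion) // v1.30.1 in this run
    }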
	I0522 18:33:08.091244  160939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:33:08.271652  160939 request.go:629] Waited for 180.311539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271713  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271719  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.271727  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.271732  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.282797  160939 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0522 18:33:08.282826  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.282835  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.282840  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.282847  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.282853  160939 round_trippers.go:580]     Audit-Id: abfdd3f0-3612-4cc0-9cb4-169b86afc2f2
	I0522 18:33:08.282857  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.282862  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.284550  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.287099  160939 system_pods.go:59] 8 kube-system pods found
	I0522 18:33:08.287133  160939 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.287139  160939 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.287143  160939 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.287148  160939 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.287156  160939 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.287161  160939 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.287170  160939 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.287175  160939 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.287184  160939 system_pods.go:74] duration metric: took 195.931068ms to wait for pod list to return data ...
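system_pods.go asserts that every pod in kube-system reports phase Running, as the eight lines above show. A sketch under the same clientset assumption:

    // Sketch of the "N kube-system pods found ... Running" assertion.
    package kube

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func systemPodsRunning(ctx context.Context, cs *kubernetes.Clientset) error {
    	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, p := range pods.Items {
    		if p.Status.Phase != corev1.PodRunning {
    			return fmt.Errorf("pod %q is %s, want Running", p.Name, p.Status.Phase)
    		}
    	}
    	fmt.Printf("%d kube-system pods found, all Running\n", len(pods.Items))
    	return nil
    }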
	I0522 18:33:08.287199  160939 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:33:08.471518  160939 request.go:629] Waited for 184.244722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471609  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471620  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.471632  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.471638  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.473861  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.473879  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.473885  160939 round_trippers.go:580]     Audit-Id: 373a6323-7376-4ad7-973b-c7b9843fbc1e
	I0522 18:33:08.473889  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.473892  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.473895  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.473898  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.473902  160939 round_trippers.go:580]     Content-Length: 261
	I0522 18:33:08.473906  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.473926  160939 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:33:08.474181  160939 default_sa.go:45] found service account: "default"
	I0522 18:33:08.474221  160939 default_sa.go:55] duration metric: took 187.005275ms for default service account to be created ...
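The default ServiceAccount is created asynchronously by the controller manager, so the test polls for it before deploying workloads. The log lists the namespace's service accounts; a direct Get by name is equivalent for this purpose (sketch, same clientset assumption):

    // Sketch: poll-friendly existence check for the "default" ServiceAccount.
    package kube

    import (
    	"context"

    	apierrors "k8s.io/apimachinery/pkg/api/errors"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func defaultSAExists(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
    	_, err := cs.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
    	if apierrors.IsNotFound(err) {
    		return false, nil // not created yet: keep polling
    	}
    	if err != nil {
    		return false, err
    	}
    	return true, nil
    }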
	I0522 18:33:08.474236  160939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:33:08.671668  160939 request.go:629] Waited for 197.344631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671731  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671738  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.671747  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.671754  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.674660  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.674693  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.674702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.674707  160939 round_trippers.go:580]     Audit-Id: a86ce0e7-c7ca-4d9a-b3f4-5977392399ab
	I0522 18:33:08.674710  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.674715  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.674721  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.674726  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.675199  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.677649  160939 system_pods.go:86] 8 kube-system pods found
	I0522 18:33:08.677676  160939 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.677682  160939 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.677689  160939 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.677700  160939 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.677712  160939 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.677718  160939 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.677728  160939 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.677736  160939 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.677746  160939 system_pods.go:126] duration metric: took 203.502619ms to wait for k8s-apps to be running ...
	I0522 18:33:08.677758  160939 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:33:08.677814  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:33:08.688253  160939 system_svc.go:56] duration metric: took 10.491535ms WaitForService to wait for kubelet
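minikube runs that probe over SSH inside the node container; locally the same check is a single exec, since systemctl is-active --quiet exits 0 only when the unit is active:

    // Sketch of the kubelet liveness probe from the ssh_runner line above.
    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil) // any non-zero exit means not active
    }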
	I0522 18:33:08.688273  160939 kubeadm.go:576] duration metric: took 16.939194998s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:33:08.688296  160939 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:33:08.871835  160939 request.go:629] Waited for 183.471986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871919  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.871941  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.871948  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.873838  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:08.873861  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.873868  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.873874  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.873881  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.873884  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.873888  160939 round_trippers.go:580]     Audit-Id: 58d6eaf2-6ad2-480d-a68d-b490633e56b2
	I0522 18:33:08.873893  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.874043  160939 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"433","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5061 chars]
	I0522 18:33:08.874388  160939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:33:08.874407  160939 node_conditions.go:123] node cpu capacity is 8
	I0522 18:33:08.874418  160939 node_conditions.go:105] duration metric: took 186.116583ms to run NodePressure ...
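The NodePressure step reads capacity straight off the node objects, which is where the ephemeral-storage and cpu figures above come from. A sketch under the same clientset assumption:

    // Sketch: reading node capacities off the NodeList.
    package kube

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    func printNodeCapacity(ctx context.Context, cs *kubernetes.Clientset) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		disk := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), disk.String())
    	}
    	return nil
    }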
	I0522 18:33:08.874431  160939 start.go:240] waiting for startup goroutines ...
	I0522 18:33:08.874437  160939 start.go:245] waiting for cluster config update ...
	I0522 18:33:08.874451  160939 start.go:254] writing updated cluster config ...
	I0522 18:33:08.876274  160939 out.go:177] 
	I0522 18:33:08.877676  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:33:08.877789  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.879303  160939 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:33:08.880612  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:33:08.881728  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:33:08.882756  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:08.882774  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:33:08.882785  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:33:08.882855  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:33:08.882870  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:33:08.882934  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.898326  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:33:08.898343  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:33:08.898358  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:33:08.898387  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:33:08.898479  160939 start.go:364] duration metric: took 72.592µs to acquireMachinesLock for "multinode-737786-m02"
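The machines lock above is keyed by machine name and carries a 500ms retry delay and a 10-minute timeout; the 72.592µs figure just means no other goroutine held it. A toy sketch of a named lock with that shape, using one buffered channel per name (an illustration only, not minikube's actual lock implementation):

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    var (
    	mu    sync.Mutex
    	locks = map[string]chan struct{}{} // one slot per machine name
    )

    // acquire retries every delay until the named lock frees up,
    // giving up after timeout -- mirroring {Delay:500ms Timeout:10m0s}.
    func acquire(name string, delay, timeout time.Duration) error {
    	mu.Lock()
    	l, ok := locks[name]
    	if !ok {
    		l = make(chan struct{}, 1)
    		locks[name] = l
    	}
    	mu.Unlock()

    	deadline := time.Now().Add(timeout)
    	for {
    		select {
    		case l <- struct{}{}: // slot was free: lock acquired
    			return nil
    		default:
    			if time.Now().After(deadline) {
    				return fmt.Errorf("timed out acquiring lock %q", name)
    			}
    			time.Sleep(delay)
    		}
    	}
    }

    func release(name string) {
    	mu.Lock()
    	defer mu.Unlock()
    	<-locks[name]
    }

    func main() {
    	start := time.Now()
    	if err := acquire("multinode-737786-m02", 500*time.Millisecond, 10*time.Minute); err != nil {
    		panic(err)
    	}
    	defer release("multinode-737786-m02")
    	fmt.Println("acquired in", time.Since(start))
    }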
	I0522 18:33:08.898505  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:33:08.898623  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:33:08.900307  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:33:08.900408  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:33:08.900435  160939 client.go:168] LocalClient.Create starting
	I0522 18:33:08.900508  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:33:08.900541  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900564  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900623  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:33:08.900647  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900668  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900894  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:33:08.915750  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc001f32540 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:33:08.915790  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
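The static IP above follows from the existing network state: the inspected IPAM config puts the gateway at 192.168.67.1 and the primary node already holds 192.168.67.2, so the new m02 container is pinned to 192.168.67.3. A minimal sketch of that last-octet arithmetic (offset-from-gateway is an inference from the log, not a quote of kic.go):

    package main

    import (
    	"fmt"
    	"net"
    )

    // nodeIP returns gateway+offset inside the same /24 -- e.g. gateway .1
    // with offset 2 yields .3 for the second machine.
    func nodeIP(gateway net.IP, offset byte) net.IP {
    	ip := gateway.To4()
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3] += offset
    	return out
    }

    func main() {
    	gw := net.ParseIP("192.168.67.1")
    	fmt.Println(nodeIP(gw, 2)) // 192.168.67.3 for multinode-737786-m02
    }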
	I0522 18:33:08.915845  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:33:08.930295  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:33:08.945898  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:33:08.945964  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:33:09.453161  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:33:09.453202  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:09.453224  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:33:09.453289  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:33:13.570301  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.116968437s)
	I0522 18:33:13.570337  160939 kic.go:203] duration metric: took 4.117109757s to extract preloaded images to volume ...
	W0522 18:33:13.570466  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:33:13.570568  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:33:13.614931  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:33:13.883217  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:33:13.899745  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:13.916953  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:33:13.956223  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:33:13.956258  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:33:14.377830  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:33:14.377884  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:33:14.398081  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.414616  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:33:14.414636  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
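kic.go:225 generates an RSA machine key, pushes the public half into the container as /home/docker/.ssh/authorized_keys (the 381-byte copy above), and then fixes its ownership. A self-contained sketch of producing such an authorized_keys line with golang.org/x/crypto/ssh (the 2048-bit key size is an assumption):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"fmt"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Stand-in for the machine key minikube writes under
    	// .minikube/machines/<name>/id_rsa.
    	priv, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	pub, err := ssh.NewPublicKey(&priv.PublicKey)
    	if err != nil {
    		panic(err)
    	}
    	// One "ssh-rsa AAAA..." line, ready to append to authorized_keys.
    	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
    }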
	I0522 18:33:14.454848  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.472868  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:33:14.472944  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.489872  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.490088  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.490103  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:33:14.602489  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.602516  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:33:14.602569  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.619132  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.619380  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.619398  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:33:14.740786  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.740854  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.756827  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.756995  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.757012  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:33:14.867113  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:33:14.867142  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:33:14.867157  160939 ubuntu.go:177] setting up certificates
	I0522 18:33:14.867169  160939 provision.go:84] configureAuth start
	I0522 18:33:14.867230  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.882769  160939 provision.go:87] duration metric: took 15.590775ms to configureAuth
	W0522 18:33:14.882788  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.882814  160939 retry.go:31] will retry after 133.214µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
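This first failure is the shape of everything that follows. configureAuth resolves the machine IP by running docker container inspect with the template `{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, indexing the Networks map by the machine name, yet the container was started with `--network multinode-737786` (see the docker run above). The missing key makes `index` return a nil endpoint, `with` skips its body, the command prints an empty string, and splitting that on "," yields one value instead of the expected two, hence "got 1 values: []". A runnable reproduction of the template behaviour (the endpoint struct is a stand-in for docker's EndpointSettings):

    package main

    import (
    	"fmt"
    	"strings"
    	"text/template"
    )

    // endpoint mimics the element type of docker's NetworkSettings.Networks,
    // which maps network name -> *EndpointSettings.
    type endpoint struct{ IPAddress, GlobalIPv6Address string }

    func render(key string, networks map[string]*endpoint) string {
    	src := fmt.Sprintf(`{{with (index . %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, key)
    	var sb strings.Builder
    	if err := template.Must(template.New("ip").Parse(src)).Execute(&sb, networks); err != nil {
    		panic(err)
    	}
    	return sb.String()
    }

    func main() {
    	// The container actually joined network "multinode-737786".
    	networks := map[string]*endpoint{
    		"multinode-737786": {IPAddress: "192.168.67.3"},
    	}
    	for _, key := range []string{"multinode-737786-m02", "multinode-737786"} {
    		out := render(key, networks)
    		fmt.Printf("index %q -> %q (%d values)\n", key, out, len(strings.Split(out, ",")))
    	}
    }

With the real network name as the key, the same template prints "192.168.67.3," and the split yields the two values the provisioner expects. While the template key and the joined network disagree, no retry can succeed, which is why every attempt below fails in roughly 15ms.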
	I0522 18:33:14.883930  160939 provision.go:84] configureAuth start
	I0522 18:33:14.883986  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.899452  160939 provision.go:87] duration metric: took 15.501642ms to configureAuth
	W0522 18:33:14.899474  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.899491  160939 retry.go:31] will retry after 108.916µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.900597  160939 provision.go:84] configureAuth start
	I0522 18:33:14.900654  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.915555  160939 provision.go:87] duration metric: took 14.940574ms to configureAuth
	W0522 18:33:14.915579  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.915597  160939 retry.go:31] will retry after 309.632µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.916706  160939 provision.go:84] configureAuth start
	I0522 18:33:14.916763  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.931974  160939 provision.go:87] duration metric: took 15.250688ms to configureAuth
	W0522 18:33:14.931998  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.932022  160939 retry.go:31] will retry after 318.322µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.933148  160939 provision.go:84] configureAuth start
	I0522 18:33:14.933214  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.948456  160939 provision.go:87] duration metric: took 15.28648ms to configureAuth
	W0522 18:33:14.948480  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.948498  160939 retry.go:31] will retry after 399.734µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.949641  160939 provision.go:84] configureAuth start
	I0522 18:33:14.949703  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.966281  160939 provision.go:87] duration metric: took 16.616876ms to configureAuth
	W0522 18:33:14.966304  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.966321  160939 retry.go:31] will retry after 408.958µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.967426  160939 provision.go:84] configureAuth start
	I0522 18:33:14.967490  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.983570  160939 provision.go:87] duration metric: took 16.124586ms to configureAuth
	W0522 18:33:14.983595  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.983618  160939 retry.go:31] will retry after 1.326072ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.985801  160939 provision.go:84] configureAuth start
	I0522 18:33:14.985868  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.000835  160939 provision.go:87] duration metric: took 15.012309ms to configureAuth
	W0522 18:33:15.000856  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.000876  160939 retry.go:31] will retry after 915.276µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.001989  160939 provision.go:84] configureAuth start
	I0522 18:33:15.002061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.016920  160939 provision.go:87] duration metric: took 14.912197ms to configureAuth
	W0522 18:33:15.016940  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.016956  160939 retry.go:31] will retry after 2.309554ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.020139  160939 provision.go:84] configureAuth start
	I0522 18:33:15.020206  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.035720  160939 provision.go:87] duration metric: took 15.563337ms to configureAuth
	W0522 18:33:15.035737  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.035758  160939 retry.go:31] will retry after 5.684682ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.041949  160939 provision.go:84] configureAuth start
	I0522 18:33:15.042023  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.057131  160939 provision.go:87] duration metric: took 15.161716ms to configureAuth
	W0522 18:33:15.057153  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.057173  160939 retry.go:31] will retry after 7.16749ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.065354  160939 provision.go:84] configureAuth start
	I0522 18:33:15.065419  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.080211  160939 provision.go:87] duration metric: took 14.836861ms to configureAuth
	W0522 18:33:15.080233  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.080253  160939 retry.go:31] will retry after 11.273171ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.092437  160939 provision.go:84] configureAuth start
	I0522 18:33:15.092522  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.107812  160939 provision.go:87] duration metric: took 15.35491ms to configureAuth
	W0522 18:33:15.107829  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.107845  160939 retry.go:31] will retry after 8.109728ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.117029  160939 provision.go:84] configureAuth start
	I0522 18:33:15.117103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.132558  160939 provision.go:87] duration metric: took 15.508983ms to configureAuth
	W0522 18:33:15.132577  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.132597  160939 retry.go:31] will retry after 10.345201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.143792  160939 provision.go:84] configureAuth start
	I0522 18:33:15.143857  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.159011  160939 provision.go:87] duration metric: took 15.196792ms to configureAuth
	W0522 18:33:15.159034  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.159054  160939 retry.go:31] will retry after 30.499115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.190240  160939 provision.go:84] configureAuth start
	I0522 18:33:15.190329  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.207177  160939 provision.go:87] duration metric: took 16.913741ms to configureAuth
	W0522 18:33:15.207195  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.207211  160939 retry.go:31] will retry after 63.879043ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.271445  160939 provision.go:84] configureAuth start
	I0522 18:33:15.271548  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.287528  160939 provision.go:87] duration metric: took 16.057048ms to configureAuth
	W0522 18:33:15.287550  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.287569  160939 retry.go:31] will retry after 67.853567ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.355802  160939 provision.go:84] configureAuth start
	I0522 18:33:15.355901  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.372258  160939 provision.go:87] duration metric: took 16.425467ms to configureAuth
	W0522 18:33:15.372281  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.372300  160939 retry.go:31] will retry after 129.065548ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.501513  160939 provision.go:84] configureAuth start
	I0522 18:33:15.501606  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.517774  160939 provision.go:87] duration metric: took 16.234544ms to configureAuth
	W0522 18:33:15.517792  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.517809  160939 retry.go:31] will retry after 177.855143ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.696167  160939 provision.go:84] configureAuth start
	I0522 18:33:15.696277  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.712184  160939 provision.go:87] duration metric: took 15.973904ms to configureAuth
	W0522 18:33:15.712203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.712222  160939 retry.go:31] will retry after 282.785493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.995691  160939 provision.go:84] configureAuth start
	I0522 18:33:15.995782  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.011555  160939 provision.go:87] duration metric: took 15.836293ms to configureAuth
	W0522 18:33:16.011573  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.011590  160939 retry.go:31] will retry after 182.7986ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.194929  160939 provision.go:84] configureAuth start
	I0522 18:33:16.195022  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.210991  160939 provision.go:87] duration metric: took 16.035288ms to configureAuth
	W0522 18:33:16.211015  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.211031  160939 retry.go:31] will retry after 462.848752ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.674586  160939 provision.go:84] configureAuth start
	I0522 18:33:16.674669  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.691880  160939 provision.go:87] duration metric: took 17.266922ms to configureAuth
	W0522 18:33:16.691906  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.691924  160939 retry.go:31] will retry after 502.555206ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.194526  160939 provision.go:84] configureAuth start
	I0522 18:33:17.194646  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.210421  160939 provision.go:87] duration metric: took 15.865877ms to configureAuth
	W0522 18:33:17.210440  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.210460  160939 retry.go:31] will retry after 567.726401ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.779177  160939 provision.go:84] configureAuth start
	I0522 18:33:17.779290  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.795539  160939 provision.go:87] duration metric: took 16.336289ms to configureAuth
	W0522 18:33:17.795558  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.795575  160939 retry.go:31] will retry after 1.826878631s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.622720  160939 provision.go:84] configureAuth start
	I0522 18:33:19.622824  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:19.638518  160939 provision.go:87] duration metric: took 15.756609ms to configureAuth
	W0522 18:33:19.638535  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.638551  160939 retry.go:31] will retry after 1.924893574s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.564442  160939 provision.go:84] configureAuth start
	I0522 18:33:21.564544  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:21.580835  160939 provision.go:87] duration metric: took 16.362041ms to configureAuth
	W0522 18:33:21.580858  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.580874  160939 retry.go:31] will retry after 4.939303373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.521956  160939 provision.go:84] configureAuth start
	I0522 18:33:26.522061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:26.537982  160939 provision.go:87] duration metric: took 16.001203ms to configureAuth
	W0522 18:33:26.538004  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.538030  160939 retry.go:31] will retry after 3.636518909s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.175081  160939 provision.go:84] configureAuth start
	I0522 18:33:30.175184  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:30.191022  160939 provision.go:87] duration metric: took 15.915164ms to configureAuth
	W0522 18:33:30.191041  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.191058  160939 retry.go:31] will retry after 10.480093853s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.671328  160939 provision.go:84] configureAuth start
	I0522 18:33:40.671406  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:40.687409  160939 provision.go:87] duration metric: took 16.054951ms to configureAuth
	W0522 18:33:40.687427  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.687455  160939 retry.go:31] will retry after 15.937633407s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.627256  160939 provision.go:84] configureAuth start
	I0522 18:33:56.627376  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:56.643481  160939 provision.go:87] duration metric: took 16.179065ms to configureAuth
	W0522 18:33:56.643501  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.643521  160939 retry.go:31] will retry after 13.921044681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.565323  160939 provision.go:84] configureAuth start
	I0522 18:34:10.565412  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:10.582184  160939 provision.go:87] duration metric: took 16.828213ms to configureAuth
	W0522 18:34:10.582203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.582221  160939 retry.go:31] will retry after 29.913467421s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.496709  160939 provision.go:84] configureAuth start
	I0522 18:34:40.496791  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:40.512924  160939 provision.go:87] duration metric: took 16.185762ms to configureAuth
	W0522 18:34:40.512946  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512964  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512971  160939 machine.go:97] duration metric: took 1m26.040084691s to provisionDockerMachine
	I0522 18:34:40.512977  160939 client.go:171] duration metric: took 1m31.612534317s to LocalClient.Create
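Taken together, the retries above trace retry.go's exponential backoff with jitter: delays start around 100µs, roughly double with a randomized spread, and top out near 30s, until provisionDockerMachine gives up after about 86 seconds. A sketch of the same shape using k8s.io/apimachinery's wait package (the parameters are assumptions, and minikube's retry.go may be implemented differently):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"

    	"k8s.io/apimachinery/pkg/util/wait"
    )

    // errTemporary mimics the provisioner's recurring failure.
    var errTemporary = errors.New("container addresses should have 2 values, got 1 values: []")

    func configureAuth() error { return errTemporary }

    func main() {
    	backoff := wait.Backoff{
    		Duration: 100 * time.Microsecond, // first delay, near the 133µs seen above
    		Factor:   2,                      // roughly doubles per attempt
    		Jitter:   0.5,                    // randomized spread, matching the uneven gaps
    		Steps:    12,                     // bounded so the demo finishes quickly
    	}
    	err := wait.ExponentialBackoff(backoff, func() (bool, error) {
    		if err := configureAuth(); err != nil {
    			fmt.Println("will retry:", err)
    			return false, nil // temporary error: keep retrying
    		}
    		return true, nil // success: stop
    	})
    	fmt.Println("gave up:", err) // wait.ErrWaitTimeout once Steps are exhausted
    }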
	I0522 18:34:42.514189  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.514234  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:42.530404  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:34:42.611715  160939 command_runner.go:130] > 27%!
	(MISSING)I0522 18:34:42.611789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:34:42.615669  160939 command_runner.go:130] > 214G
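The mangled `27%!` / `(MISSING)` two lines up is a printf artifact rather than df output: the command printed `27%`, and that string was evidently passed through a printf-style logging call as the format argument, so Go's fmt treated the bare `%` (with the trailing newline as its verb) as a directive with no matching operand. A two-line demonstration of the mechanism (the exact call site inside command_runner is an assumption):

    package main

    import "fmt"

    func main() {
    	out := "27%\n" // what `df -h /var | awk 'NR==2{print $5}'` produced
    	fmt.Printf(out)       // misused as a format string: prints "27%!" then "(MISSING)"
    	fmt.Printf("%s", out) // correct: prints "27%"
    }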
	I0522 18:34:42.615707  160939 start.go:128] duration metric: took 1m33.717073149s to createHost
	I0522 18:34:42.615722  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m33.717228717s
	W0522 18:34:42.615744  160939 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:42.616137  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:42.632434  160939 stop.go:39] StopHost: multinode-737786-m02
	W0522 18:34:42.632685  160939 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.634506  160939 out.go:177] * Stopping node "multinode-737786-m02"  ...
	I0522 18:34:42.635683  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	W0522 18:34:42.651010  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.652276  160939 out.go:177] * Powering off "multinode-737786-m02" via SSH ...
	I0522 18:34:42.653470  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	I0522 18:34:43.708767  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.725456  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:43.725497  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:43.725503  160939 stop.go:96] shutdown container: err=<nil>
	I0522 18:34:43.725538  160939 main.go:141] libmachine: Stopping "multinode-737786-m02"...
	I0522 18:34:43.725609  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.740494  160939 stop.go:66] stop err: Machine "multinode-737786-m02" is already stopped.
	I0522 18:34:43.740519  160939 stop.go:69] host is already stopped
	W0522 18:34:44.740739  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:44.742589  160939 out.go:177] * Deleting "multinode-737786-m02" in docker ...
	I0522 18:34:44.743791  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	I0522 18:34:44.759917  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:44.775348  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	W0522 18:34:44.791230  160939 cli_runner.go:211] docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:34:44.791265  160939 oci.go:650] error shutdown multinode-737786-m02: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 2dc5a71c55c9ef5d6ad1baa728c2ff15efe34f377c26beee83af68ffc394ce01 is not running
	I0522 18:34:45.792215  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:45.808448  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:45.808478  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:45.808522  160939 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m02
	I0522 18:34:45.828241  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	W0522 18:34:45.843001  160939 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m02 returned with exit code 1
	I0522 18:34:45.843068  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:45.858067  160939 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:34:45.872863  160939 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:34:45.872955  160939 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
	W0522 18:34:45.873163  160939 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:45.873175  160939 start.go:728] Will try again in 5 seconds ...
	I0522 18:34:50.874261  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:34:50.874388  160939 start.go:364] duration metric: took 68.497µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:34:50.874412  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:34:50.874486  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:34:50.876407  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:34:50.876543  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:34:50.876576  160939 client.go:168] LocalClient.Create starting
	I0522 18:34:50.876662  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:34:50.876712  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876732  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.876835  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:34:50.876869  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876890  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.877138  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:50.893470  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc0009258c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:34:50.893509  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
	I0522 18:34:50.893558  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:34:50.909079  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:34:50.925444  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:34:50.925538  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:34:51.321868  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:34:51.321909  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:34:51.321928  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:34:51.321980  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:34:55.613221  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291204502s)
	I0522 18:34:55.613251  160939 kic.go:203] duration metric: took 4.291320169s to extract preloaded images to volume ...
	W0522 18:34:55.613360  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:34:55.613435  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:34:55.658317  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:34:55.924047  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:34:55.941247  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:55.958588  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:34:56.004446  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:34:56.004476  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:34:56.219497  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:34:56.219536  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:34:56.240489  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.268881  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:34:56.268907  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:34:56.353114  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.375972  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:34:56.376058  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.395706  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.395915  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.395934  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:34:56.554445  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.554477  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:34:56.554533  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.573230  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.573401  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.573414  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:34:56.702163  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.702242  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.718029  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.718187  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.718204  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:34:56.830876  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:34:56.830907  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:34:56.830922  160939 ubuntu.go:177] setting up certificates
	I0522 18:34:56.830931  160939 provision.go:84] configureAuth start
	I0522 18:34:56.830976  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.846805  160939 provision.go:87] duration metric: took 15.865379ms to configureAuth
	W0522 18:34:56.846831  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.846851  160939 retry.go:31] will retry after 140.64µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
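The failing probe above runs a Go template against `docker container inspect` and expects an "IPv4,IPv6" pair; when the template yields nothing (the container is not yet attached to the `multinode-737786` network), splitting on the comma produces a single empty field, which is exactly the "should have 2 values, got 1 values: []" error logged here. A minimal sketch of that check, assuming the same template and a comma split (the helper name `containerIPs` is illustrative, not minikube's actual function):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerIPs runs `docker container inspect` with the template seen in
    // the log and splits the "IPv4,IPv6" pair. An empty template result splits
    // into one empty field, reproducing the failure above.
    func containerIPs(container, network string) (string, string, error) {
    	tmpl := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
    	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
    	if err != nil {
    		return "", "", err
    	}
    	fields := strings.Split(strings.TrimSpace(string(out)), ",")
    	if len(fields) != 2 {
    		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(fields), fields)
    	}
    	return fields[0], fields[1], nil
    }

    func main() {
    	ip4, ip6, err := containerIPs("multinode-737786-m02", "multinode-737786")
    	if err != nil {
    		fmt.Println("probe failed:", err)
    		return
    	}
    	fmt.Println("IPv4:", ip4, "IPv6:", ip6)
    }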
	[... the same configureAuth probe/failure cycle repeats with exponentially growing backoff (retries from ~140µs up to ~26.4s) from 18:34:56 through 18:36:22; every attempt fails with "error getting ip during provisioning: container addresses should have 2 values, got 1 values: []" ...]
	I0522 18:36:22.957560  160939 provision.go:84] configureAuth start
	I0522 18:36:22.957642  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:36:22.973473  160939 provision.go:87] duration metric: took 15.882846ms to configureAuth
	W0522 18:36:22.973490  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973508  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973514  160939 machine.go:97] duration metric: took 1m26.59751999s to provisionDockerMachine
	I0522 18:36:22.973521  160939 client.go:171] duration metric: took 1m32.0969361s to LocalClient.Create
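The retry cadence above (delays starting around 140µs and roughly doubling, with jitter, up to ~26s before the deadline expires) is a standard exponential-backoff loop. A minimal sketch of that pattern under those assumptions — an illustration of what `retry.go:31` is logging, not minikube's actual retry package:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpo keeps calling fn until it succeeds or maxTotal elapses,
    // sleeping an exponentially growing, jittered interval between attempts --
    // the same shape as the "will retry after ..." lines above.
    func retryExpo(fn func() error, initial, maxTotal time.Duration) error {
    	deadline := time.Now().Add(maxTotal)
    	delay := initial
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out: last error: %w", err)
    		}
    		// Jitter the delay by roughly +/-50% so concurrent retries spread out.
    		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %v: %v\n", sleep, err)
    		time.Sleep(sleep)
    		delay *= 2
    	}
    }

    func main() {
    	attempts := 0
    	err := retryExpo(func() error {
    		attempts++
    		if attempts < 5 {
    			return errors.New("error getting ip during provisioning")
    		}
    		return nil
    	}, 140*time.Microsecond, 2*time.Minute)
    	fmt.Println("result:", err, "after", attempts, "attempts")
    }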
	I0522 18:36:24.974123  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:24.974170  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:36:24.990325  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:36:25.071724  160939 command_runner.go:130] > 27%
	I0522 18:36:25.071789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:36:25.075456  160939 command_runner.go:130] > 214G
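The two `df` probes above are how the runner reads back `27%` used and `214G` available on `/var`, each taken from the second line of df output. A minimal local sketch of that check, assuming the same awk field positions (`varDiskUsage` is an illustrative helper, not minikube's):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    )

    // varDiskUsage mirrors the two shell probes in the log: percent used and
    // gigabytes available on /var, both parsed from `df` via awk.
    func varDiskUsage() (usedPct, availGB int, err error) {
    	pctOut, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
    	if err != nil {
    		return 0, 0, err
    	}
    	usedPct, err = strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(pctOut)), "%"))
    	if err != nil {
    		return 0, 0, err
    	}
    	gbOut, err := exec.Command("sh", "-c", `df -BG /var | awk 'NR==2{print $4}'`).Output()
    	if err != nil {
    		return 0, 0, err
    	}
    	availGB, err = strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(gbOut)), "G"))
    	return usedPct, availGB, err
    }

    func main() {
    	pct, gb, err := varDiskUsage()
    	if err != nil {
    		fmt.Println("df probe failed:", err)
    		return
    	}
    	fmt.Printf("/var: %d%% used, %dG available\n", pct, gb)
    }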
	I0522 18:36:25.075742  160939 start.go:128] duration metric: took 1m34.201241799s to createHost
	I0522 18:36:25.075767  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m34.20136546s
	W0522 18:36:25.075854  160939 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:25.077767  160939 out.go:177] 
	W0522 18:36:25.079095  160939 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:36:25.079109  160939 out.go:239] * 
	W0522 18:36:25.079919  160939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:36:25.081455  160939 out.go:177] 
	
	
	==> Docker <==
	May 22 18:48:12 multinode-737786 dockerd[1210]: 2024/05/22 18:48:12 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	[... the same superfluous WriteHeader message repeats 6 more times at 18:48:12, 9 times at 18:48:14, and 9 times between 18:52:21 and 18:52:22 ...]
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e5611854b2b6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   16 minutes ago      Running             busybox                   0                   7fefb8ab9046a       busybox-fc5497c4f-7zbr8
	14ca8a91c3a85       cbb01a7bd410d                                                                                         19 minutes ago      Running             coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              19 minutes ago      Running             kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	16cb7c11afec8       6e38f40d628db                                                                                         19 minutes ago      Running             storage-provisioner       0                   27a641da2a092       storage-provisioner
	b73d925361c05       cbb01a7bd410d                                                                                         19 minutes ago      Exited              coredns                   0                   6711c2a968d71       coredns-7db6d8ff4d-jhsz9
	4394527287d9e       747097150317f                                                                                         19 minutes ago      Running             kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                                         20 minutes ago      Running             kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                                         20 minutes ago      Running             etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                                         20 minutes ago      Running             kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                                         20 minutes ago      Running             kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	[INFO] 10.244.0.3:48378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238684s
	[INFO] 10.244.0.3:59221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013090305s
	[INFO] 10.244.0.3:42881 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000740933s
	[INFO] 10.244.0.3:51488 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.022252255s
	[INFO] 10.244.0.3:57389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143058s
	[INFO] 10.244.0.3:48854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005255577s
	[INFO] 10.244.0.3:37749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129992s
	[INFO] 10.244.0.3:49159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143259s
	[INFO] 10.244.0.3:33267 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003880164s
	[INFO] 10.244.0.3:55644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123464s
	[INFO] 10.244.0.3:40518 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115443s
	[INFO] 10.244.0.3:44250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088045s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102385s
	[INFO] 10.244.0.3:58734 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104426s
	[INFO] 10.244.0.3:33373 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089833s
	[INFO] 10.244.0.3:46218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084391s
	[INFO] 10.244.0.3:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011407s
	[INFO] 10.244.0.3:41894 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140377s
	[INFO] 10.244.0.3:40760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132699s
	[INFO] 10.244.0.3:37622 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097943s
	
	
	==> coredns [b73d925361c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[... the previous line repeats 7 more times ...]
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:52:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 796df425fb994719a2b6ac89f60c2334
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 19m   kube-proxy       
	  Normal  Starting                 19m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m   kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m   kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m   kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m   node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	
	
	==> dmesg <==
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	[May22 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 88 87 ea 82 8c 08 06
	[  +0.002367] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 1a b3 ac 14 45 08 06
	[May22 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 89 e2 0f b2 b8 08 06
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.364428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.365643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.365639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:32:33.365646Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.365693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.36588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.365903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	{"level":"info","ts":"2024-05-22T18:42:33.669298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-05-22T18:42:33.674226Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":669,"took":"4.650962ms","hash":2988179383,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-22T18:42:33.674261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2988179383,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:47:33.674441Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-05-22T18:47:33.676887Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":911,"took":"2.169071ms","hash":3399617496,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:47:33.676921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3399617496,"revision":911,"compact-revision":669}
	{"level":"info","ts":"2024-05-22T18:52:33.678754Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1153}
	{"level":"info","ts":"2024-05-22T18:52:33.681122Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1153,"took":"2.100554ms","hash":435437424,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:52:33.681165Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435437424,"revision":1153,"compact-revision":911}
	
	
	==> kernel <==
	 18:52:33 up  1:34,  0 users,  load average: 0.54, 0.39, 0.34
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:50:26.759084       1 main.go:227] handling current node
	I0522 18:50:36.762736       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:50:36.762759       1 main.go:227] handling current node
	I0522 18:50:46.774779       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:50:46.774803       1 main.go:227] handling current node
	I0522 18:50:56.778426       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:50:56.778448       1 main.go:227] handling current node
	I0522 18:51:06.790552       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:06.790575       1 main.go:227] handling current node
	I0522 18:51:16.793643       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:16.793670       1 main.go:227] handling current node
	I0522 18:51:26.796584       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:26.796608       1 main.go:227] handling current node
	I0522 18:51:36.801455       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:36.801477       1 main.go:227] handling current node
	I0522 18:51:46.810361       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:46.810385       1 main.go:227] handling current node
	I0522 18:51:56.813665       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:51:56.813687       1 main.go:227] handling current node
	I0522 18:52:06.822432       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:52:06.822458       1 main.go:227] handling current node
	I0522 18:52:16.826079       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:52:16.826100       1 main.go:227] handling current node
	I0522 18:52:26.829629       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:52:26.829650       1 main.go:227] handling current node
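[Note: the kindnet output above is one reconcile pass every ~10 seconds: list node IPs, then either install routes for peers or, as here where only 192.168.67.2 remains, just note the current node. A sketch of that loop shape, an assumption about structure rather than kindnet's actual code:

    package main

    import (
        "log"
        "time"
    )

    func main() {
        // stand-in for the node list kindnet reads from the API server;
        // here only the current node remains, as in the log above
        nodes := map[string]bool{"192.168.67.2": true} // IP -> is current node
        for range time.Tick(10 * time.Second) {
            for ip, isCurrent := range nodes {
                log.Printf("Handling node with IPs: map[%s:{}]", ip)
                if isCurrent {
                    log.Printf("handling current node") // no routes needed for self
                    continue
                }
                // a peer node would get routes to its pod CIDR installed here
            }
        }
    }
]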
	
	
	==> kube-apiserver [6991b35c6800] <==
	I0522 18:32:35.449798       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:32:35.453291       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:32:35.453308       1 policy_source.go:224] refreshing policies
	I0522 18:32:35.468422       1 controller.go:615] quota admission added evaluator for: namespaces
	I0522 18:32:35.648097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:32:36.270908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 18:32:36.276360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 18:32:36.276373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:32:36.650126       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 18:32:36.683129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 18:32:36.777692       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 18:32:36.791941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0522 18:32:36.793832       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:32:36.798754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 18:32:37.359568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 18:32:37.803958       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 18:32:37.812834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 18:32:37.819384       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 18:32:51.513861       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 18:32:51.614880       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:48:10.913684       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57644: use of closed network connection
	E0522 18:48:11.175047       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57696: use of closed network connection
	E0522 18:48:11.423032       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57770: use of closed network connection
	E0522 18:48:13.525053       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57842: use of closed network connection
	E0522 18:48:13.672815       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57864: use of closed network connection
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	I0522 18:36:27.123251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.253947ms"
	I0522 18:36:27.133722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.381144ms"
	I0522 18:36:27.133807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.98µs"
	I0522 18:36:27.133845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.606µs"
	I0522 18:36:30.202749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.775378ms"
	I0522 18:36:30.202822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.162µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
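[Note: the proxier line above explains why kube-proxy sets route_localnet=1 (NodePorts on localhost) and which flags turn that behavior off. A quick way to confirm the setting from inside the node, sketched in Go; the sysctl path is the standard kernel location, not anything taken from kube-proxy itself:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // the sysctl kube-proxy reports setting in the log above
        b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/route_localnet")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("route_localnet =", strings.TrimSpace(string(b))) // "1" once kube-proxy has run
    }
]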
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:35.377344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.252907    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.988563    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhhmr" podStartSLOduration=2.9885258439999998 podStartE2EDuration="2.988525844s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.988079663 +0000 UTC m=+16.414649501" watchObservedRunningTime="2024-05-22 18:32:53.988525844 +0000 UTC m=+16.415095679"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.995975    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.995953678 podStartE2EDuration="995.953678ms" podCreationTimestamp="2024-05-22 18:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.995721962 +0000 UTC m=+16.422291803" watchObservedRunningTime="2024-05-22 18:32:53.995953678 +0000 UTC m=+16.422523513"
	May 22 18:32:54 multinode-737786 kubelet[2370]: I0522 18:32:54.011952    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jhsz9" podStartSLOduration=3.011934656 podStartE2EDuration="3.011934656s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:54.011824217 +0000 UTC m=+16.438394051" watchObservedRunningTime="2024-05-22 18:32:54.011934656 +0000 UTC m=+16.438504490"
	May 22 18:32:56 multinode-737786 kubelet[2370]: I0522 18:32:56.027149    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qpfbl" podStartSLOduration=2.150242403 podStartE2EDuration="5.027130161s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="2024-05-22 18:32:52.549285586 +0000 UTC m=+14.975855404" lastFinishedPulling="2024-05-22 18:32:55.426173334 +0000 UTC m=+17.852743162" observedRunningTime="2024-05-22 18:32:56.026868759 +0000 UTC m=+18.453438592" watchObservedRunningTime="2024-05-22 18:32:56.027130161 +0000 UTC m=+18.453699994"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.024575    2370 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.025200    2370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467011    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467063    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467471    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.469105    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9" (OuterVolumeSpecName: "kube-api-access-44bz9") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "kube-api-access-44bz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567723    2370 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567767    2370 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.104709    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.116635    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.118819    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: E0522 18:33:07.119523    2370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.119568    2370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"} err="failed to get container status \"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de\": rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.656301    2370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" path="/var/lib/kubelet/pods/be9eeea7-ca23-4606-8965-0eb7a95e4a0d/volumes"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113341    2370 topology_manager.go:215] "Topology Admit Handler" podUID="3cb1c926-1ddd-432d-bfae-23cc2cf1d67e" podNamespace="default" podName="busybox-fc5497c4f-7zbr8"
	May 22 18:36:27 multinode-737786 kubelet[2370]: E0522 18:36:27.113441    2370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113480    2370 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.310549    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2v4\" (UniqueName: \"kubernetes.io/projected/3cb1c926-1ddd-432d-bfae-23cc2cf1d67e-kube-api-access-bt2v4\") pod \"busybox-fc5497c4f-7zbr8\" (UID: \"3cb1c926-1ddd-432d-bfae-23cc2cf1d67e\") " pod="default/busybox-fc5497c4f-7zbr8"
	May 22 18:36:30 multinode-737786 kubelet[2370]: I0522 18:36:30.199164    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-7zbr8" podStartSLOduration=1.5746006019999998 podStartE2EDuration="3.199142439s" podCreationTimestamp="2024-05-22 18:36:27 +0000 UTC" firstStartedPulling="2024-05-22 18:36:27.886226491 +0000 UTC m=+230.312796315" lastFinishedPulling="2024-05-22 18:36:29.510768323 +0000 UTC m=+231.937338152" observedRunningTime="2024-05-22 18:36:30.198865287 +0000 UTC m=+232.625435120" watchObservedRunningTime="2024-05-22 18:36:30.199142439 +0000 UTC m=+232.625712274"
	May 22 18:48:11 multinode-737786 kubelet[2370]: E0522 18:48:11.423039    2370 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:55084->[::1]:43097: write tcp [::1]:55084->[::1]:43097: write: broken pipe
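[Note: the pod_startup_latency_tracker entries above are plain timestamp arithmetic: with no image pull (firstStartedPulling/lastFinishedPulling at the zero time), podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp. Checking the coredns-7db6d8ff4d-fhhmr line:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2024-05-22 18:32:51 +0000 UTC")
        observed, _ := time.Parse(layout, "2024-05-22 18:32:53.988525844 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 2.988525844s, the logged podStartE2EDuration
    }
]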
	
	
	==> storage-provisioner [16cb7c11afec] <==
	I0522 18:32:53.558799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:32:53.565899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:32:53.565955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:32:53.572167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:32:53.572280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	I0522 18:32:53.573084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef became leader
	I0522 18:32:53.672834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
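[Note: the storage-provisioner above wins a leader election on the kube-system/k8s.io-minikube-hostpath lock (an Endpoints-based lock, per the event) before starting its controller. The same pattern with client-go's current Lease lock, as a hedged sketch rather than the provisioner's actual code; the identity string is made up:

    package main

    import (
        "context"
        "log"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        lock := &resourcelock.LeaseLock{
            LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
            Client:     client.CoordinationV1(),
            LockConfig: resourcelock.ResourceLockConfig{Identity: "example-provisioner"}, // hypothetical identity
        }

        leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
            Lock:            lock,
            LeaseDuration:   15 * time.Second,
            RenewDeadline:   10 * time.Second,
            RetryPeriod:     2 * time.Second,
            ReleaseOnCancel: true,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) {
                    log.Println("acquired lease, starting provisioner controller")
                },
                OnStoppedLeading: func() {
                    log.Println("lost lease, stopping")
                },
            },
        })
    }
]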
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/StopNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  55s (x4 over 16m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/StopNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopNode (3.49s)
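[Note: the FailedScheduling event above ("1 node(s) didn't match pod anti-affinity rules") is what a required pod anti-affinity produces once its topology domains are exhausted: the busybox replicas refuse to share a node, and with a single schedulable node left the second replica stays Pending. A hypothetical reconstruction of that kind of rule, using the client-go API types; the exact selector and topology key are assumptions, not read from the test sources:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        aff := corev1.Affinity{
            PodAntiAffinity: &corev1.PodAntiAffinity{
                RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
                    LabelSelector: &metav1.LabelSelector{
                        MatchLabels: map[string]string{"app": "busybox"},
                    },
                    // one replica per node: hostname is the topology domain
                    TopologyKey: "kubernetes.io/hostname",
                }},
            },
        }
        out, err := yaml.Marshal(aff)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out)) // the YAML block such a pod spec would carry
    }
]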

                                                
                                    
TestMultiNode/serial/StartAfterStop (162.58s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 node start m03 -v=7 --alsologtostderr: exit status 80 (1m47.98850117s)

                                                
                                                
-- stdout --
	* Starting "multinode-737786-m03" worker node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "multinode-737786-m03" ...
	* Updating the running docker "multinode-737786-m03" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:52:34.499498  186216 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:52:34.499788  186216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:52:34.499798  186216 out.go:304] Setting ErrFile to fd 2...
	I0522 18:52:34.499802  186216 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:52:34.499975  186216 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:52:34.500200  186216 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:52:34.500505  186216 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:52:34.500856  186216 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	W0522 18:52:34.516641  186216 host.go:58] "multinode-737786-m03" host status: Stopped
	I0522 18:52:34.518693  186216 out.go:177] * Starting "multinode-737786-m03" worker node in "multinode-737786" cluster
	I0522 18:52:34.520012  186216 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:52:34.521116  186216 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:52:34.522115  186216 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:52:34.522170  186216 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:52:34.522181  186216 cache.go:56] Caching tarball of preloaded images
	I0522 18:52:34.522239  186216 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:52:34.522267  186216 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:52:34.522277  186216 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:52:34.522375  186216 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:52:34.537515  186216 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:52:34.537539  186216 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:52:34.537555  186216 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:52:34.537585  186216 start.go:360] acquireMachinesLock for multinode-737786-m03: {Name:mk1ab0dc50e34cae21563ba34f13025bd2451afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:52:34.537654  186216 start.go:364] duration metric: took 49.187µs to acquireMachinesLock for "multinode-737786-m03"
	I0522 18:52:34.537671  186216 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:52:34.537681  186216 fix.go:54] fixHost starting: m03
	I0522 18:52:34.537884  186216 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:52:34.552890  186216 fix.go:112] recreateIfNeeded on multinode-737786-m03: state=Stopped err=<nil>
	W0522 18:52:34.552923  186216 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:52:34.554600  186216 out.go:177] * Restarting existing docker container for "multinode-737786-m03" ...
	I0522 18:52:34.555722  186216 cli_runner.go:164] Run: docker start multinode-737786-m03
	I0522 18:52:34.843032  186216 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:52:34.860488  186216 kic.go:430] container "multinode-737786-m03" state is running.
	I0522 18:52:34.860861  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:34.878795  186216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:52:34.878846  186216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:52:34.895329  186216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32922 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa Username:docker}
	W0522 18:52:34.896182  186216 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:60120->127.0.0.1:32922: read: connection reset by peer
	I0522 18:52:34.896211  186216 retry.go:31] will retry after 280.923979ms: ssh: handshake failed: read tcp 127.0.0.1:60120->127.0.0.1:32922: read: connection reset by peer
	I0522 18:52:35.347503  186216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:52:35.351845  186216 fix.go:56] duration metric: took 814.159468ms for fixHost
	I0522 18:52:35.351870  186216 start.go:83] releasing machines lock for "multinode-737786-m03", held for 814.20437ms
	W0522 18:52:35.351887  186216 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:52:35.351950  186216 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:52:35.351964  186216 start.go:728] Will try again in 5 seconds ...
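[Note: the "container addresses should have 2 values, got 1 values: []" failure above comes straight out of the inspect template a few lines earlier: {{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}. If the restarted container has no entry under that network name, the with block emits nothing, and a comma-split of the empty output has one value instead of two. A small demonstration of the split, assuming (from the message format alone) a plain strings.Split on ",":

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // Attached to the network, the template prints "IPv4,IPv6" (IPv6
        // may be empty), and a comma-split yields the expected two values:
        fmt.Println(strings.Split("192.168.67.4,", ",")) // [192.168.67.4 ]

        // With no entry for the network, {{with ...}} emits nothing, and
        // splitting the empty string yields one empty value -- exactly
        // "got 1 values: []" as logged above:
        out := strings.Split("", ",")
        fmt.Println(len(out), out) // 1 []
    }
]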
	I0522 18:52:40.352666  186216 start.go:360] acquireMachinesLock for multinode-737786-m03: {Name:mk1ab0dc50e34cae21563ba34f13025bd2451afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:52:40.352781  186216 start.go:364] duration metric: took 77.93µs to acquireMachinesLock for "multinode-737786-m03"
	I0522 18:52:40.352808  186216 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:52:40.352839  186216 fix.go:54] fixHost starting: m03
	I0522 18:52:40.353169  186216 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:52:40.369496  186216 fix.go:112] recreateIfNeeded on multinode-737786-m03: state=Running err=<nil>
	W0522 18:52:40.369520  186216 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:52:40.371369  186216 out.go:177] * Updating the running docker "multinode-737786-m03" container ...
	I0522 18:52:40.372645  186216 machine.go:94] provisionDockerMachine start ...
	I0522 18:52:40.372706  186216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:52:40.388430  186216 main.go:141] libmachine: Using SSH client type: native
	I0522 18:52:40.388613  186216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32922 <nil> <nil>}
	I0522 18:52:40.388624  186216 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:52:40.498577  186216 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m03
	
	I0522 18:52:40.498604  186216 ubuntu.go:169] provisioning hostname "multinode-737786-m03"
	I0522 18:52:40.498680  186216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:52:40.514753  186216 main.go:141] libmachine: Using SSH client type: native
	I0522 18:52:40.514914  186216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32922 <nil> <nil>}
	I0522 18:52:40.514927  186216 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m03 && echo "multinode-737786-m03" | sudo tee /etc/hostname
	I0522 18:52:40.637071  186216 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m03
	
	I0522 18:52:40.637147  186216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:52:40.653594  186216 main.go:141] libmachine: Using SSH client type: native
	I0522 18:52:40.653761  186216 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32922 <nil> <nil>}
	I0522 18:52:40.653779  186216 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:52:40.767123  186216 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:52:40.767157  186216 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:52:40.767191  186216 ubuntu.go:177] setting up certificates
	I0522 18:52:40.767210  186216 provision.go:84] configureAuth start
	I0522 18:52:40.767305  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.783123  186216 provision.go:87] duration metric: took 15.898551ms to configureAuth
	W0522 18:52:40.783147  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.783165  186216 retry.go:31] will retry after 92.722µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.784289  186216 provision.go:84] configureAuth start
	I0522 18:52:40.784358  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.800361  186216 provision.go:87] duration metric: took 16.050976ms to configureAuth
	W0522 18:52:40.800377  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.800390  186216 retry.go:31] will retry after 109.642µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.801503  186216 provision.go:84] configureAuth start
	I0522 18:52:40.801562  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.816440  186216 provision.go:87] duration metric: took 14.920636ms to configureAuth
	W0522 18:52:40.816456  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.816470  186216 retry.go:31] will retry after 132.251µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.817571  186216 provision.go:84] configureAuth start
	I0522 18:52:40.817621  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.832569  186216 provision.go:87] duration metric: took 14.981286ms to configureAuth
	W0522 18:52:40.832587  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.832602  186216 retry.go:31] will retry after 281.402µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.833743  186216 provision.go:84] configureAuth start
	I0522 18:52:40.833817  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.848761  186216 provision.go:87] duration metric: took 14.998295ms to configureAuth
	W0522 18:52:40.848777  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.848792  186216 retry.go:31] will retry after 429.618µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.849900  186216 provision.go:84] configureAuth start
	I0522 18:52:40.849949  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.864777  186216 provision.go:87] duration metric: took 14.860951ms to configureAuth
	W0522 18:52:40.864791  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.864806  186216 retry.go:31] will retry after 534.658µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.865913  186216 provision.go:84] configureAuth start
	I0522 18:52:40.865962  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.880844  186216 provision.go:87] duration metric: took 14.915901ms to configureAuth
	W0522 18:52:40.880861  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.880875  186216 retry.go:31] will retry after 611.036µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.881982  186216 provision.go:84] configureAuth start
	I0522 18:52:40.882037  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.897110  186216 provision.go:87] duration metric: took 15.110213ms to configureAuth
	W0522 18:52:40.897131  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.897148  186216 retry.go:31] will retry after 1.54457ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.899319  186216 provision.go:84] configureAuth start
	I0522 18:52:40.899372  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.914235  186216 provision.go:87] duration metric: took 14.900151ms to configureAuth
	W0522 18:52:40.914251  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.914268  186216 retry.go:31] will retry after 2.780469ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.917442  186216 provision.go:84] configureAuth start
	I0522 18:52:40.917493  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.932538  186216 provision.go:87] duration metric: took 15.080164ms to configureAuth
	W0522 18:52:40.932555  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.932570  186216 retry.go:31] will retry after 5.701542ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.938768  186216 provision.go:84] configureAuth start
	I0522 18:52:40.938834  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.953194  186216 provision.go:87] duration metric: took 14.409025ms to configureAuth
	W0522 18:52:40.953210  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.953223  186216 retry.go:31] will retry after 8.510158ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.962399  186216 provision.go:84] configureAuth start
	I0522 18:52:40.962453  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.976776  186216 provision.go:87] duration metric: took 14.360556ms to configureAuth
	W0522 18:52:40.976792  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.976806  186216 retry.go:31] will retry after 5.736299ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.983002  186216 provision.go:84] configureAuth start
	I0522 18:52:40.983052  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:40.998265  186216 provision.go:87] duration metric: took 15.24594ms to configureAuth
	W0522 18:52:40.998286  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:40.998303  186216 retry.go:31] will retry after 10.172875ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.009482  186216 provision.go:84] configureAuth start
	I0522 18:52:41.009558  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.025079  186216 provision.go:87] duration metric: took 15.578075ms to configureAuth
	W0522 18:52:41.025095  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.025108  186216 retry.go:31] will retry after 27.142951ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.053296  186216 provision.go:84] configureAuth start
	I0522 18:52:41.053364  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.069551  186216 provision.go:87] duration metric: took 16.232787ms to configureAuth
	W0522 18:52:41.069573  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.069590  186216 retry.go:31] will retry after 31.486219ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.101802  186216 provision.go:84] configureAuth start
	I0522 18:52:41.101887  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.117772  186216 provision.go:87] duration metric: took 15.942306ms to configureAuth
	W0522 18:52:41.117790  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.117813  186216 retry.go:31] will retry after 50.369704ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.169019  186216 provision.go:84] configureAuth start
	I0522 18:52:41.169105  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.184538  186216 provision.go:87] duration metric: took 15.49306ms to configureAuth
	W0522 18:52:41.184557  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.184571  186216 retry.go:31] will retry after 75.82481ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.260804  186216 provision.go:84] configureAuth start
	I0522 18:52:41.260876  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.277104  186216 provision.go:87] duration metric: took 16.273529ms to configureAuth
	W0522 18:52:41.277125  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.277140  186216 retry.go:31] will retry after 101.284492ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.379392  186216 provision.go:84] configureAuth start
	I0522 18:52:41.379493  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.395845  186216 provision.go:87] duration metric: took 16.426966ms to configureAuth
	W0522 18:52:41.395865  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.395880  186216 retry.go:31] will retry after 184.903748ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.581229  186216 provision.go:84] configureAuth start
	I0522 18:52:41.581317  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.597664  186216 provision.go:87] duration metric: took 16.407684ms to configureAuth
	W0522 18:52:41.597683  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.597698  186216 retry.go:31] will retry after 233.403774ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.832081  186216 provision.go:84] configureAuth start
	I0522 18:52:41.832195  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:41.848631  186216 provision.go:87] duration metric: took 16.52457ms to configureAuth
	W0522 18:52:41.848648  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:41.848664  186216 retry.go:31] will retry after 441.937889ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:42.291312  186216 provision.go:84] configureAuth start
	I0522 18:52:42.291424  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:42.307006  186216 provision.go:87] duration metric: took 15.669285ms to configureAuth
	W0522 18:52:42.307023  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:42.307037  186216 retry.go:31] will retry after 324.059197ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:42.631451  186216 provision.go:84] configureAuth start
	I0522 18:52:42.631555  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:42.647104  186216 provision.go:87] duration metric: took 15.609431ms to configureAuth
	W0522 18:52:42.647121  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:42.647136  186216 retry.go:31] will retry after 808.323488ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:43.456041  186216 provision.go:84] configureAuth start
	I0522 18:52:43.456133  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:43.471927  186216 provision.go:87] duration metric: took 15.843364ms to configureAuth
	W0522 18:52:43.471946  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:43.471961  186216 retry.go:31] will retry after 1.009868436s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:44.482103  186216 provision.go:84] configureAuth start
	I0522 18:52:44.482175  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:44.498368  186216 provision.go:87] duration metric: took 16.240479ms to configureAuth
	W0522 18:52:44.498386  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:44.498402  186216 retry.go:31] will retry after 1.687872562s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:46.186686  186216 provision.go:84] configureAuth start
	I0522 18:52:46.186802  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:46.202243  186216 provision.go:87] duration metric: took 15.520344ms to configureAuth
	W0522 18:52:46.202260  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:46.202275  186216 retry.go:31] will retry after 1.595494306s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:47.798974  186216 provision.go:84] configureAuth start
	I0522 18:52:47.799105  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:47.815242  186216 provision.go:87] duration metric: took 16.236605ms to configureAuth
	W0522 18:52:47.815260  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:47.815289  186216 retry.go:31] will retry after 2.821798682s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:50.637651  186216 provision.go:84] configureAuth start
	I0522 18:52:50.637755  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:50.654336  186216 provision.go:87] duration metric: took 16.659507ms to configureAuth
	W0522 18:52:50.654363  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:50.654397  186216 retry.go:31] will retry after 7.608977123s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:58.266255  186216 provision.go:84] configureAuth start
	I0522 18:52:58.266381  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:52:58.282319  186216 provision.go:87] duration metric: took 16.033386ms to configureAuth
	W0522 18:52:58.282336  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:52:58.282355  186216 retry.go:31] will retry after 11.423341338s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:53:09.706683  186216 provision.go:84] configureAuth start
	I0522 18:53:09.706817  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:53:09.723290  186216 provision.go:87] duration metric: took 16.554767ms to configureAuth
	W0522 18:53:09.723311  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:53:09.723336  186216 retry.go:31] will retry after 11.97123681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:53:21.696168  186216 provision.go:84] configureAuth start
	I0522 18:53:21.696306  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:53:21.712280  186216 provision.go:87] duration metric: took 16.081472ms to configureAuth
	W0522 18:53:21.712299  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:53:21.712319  186216 retry.go:31] will retry after 28.609117041s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:53:50.321994  186216 provision.go:84] configureAuth start
	I0522 18:53:50.322078  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:53:50.338005  186216 provision.go:87] duration metric: took 15.977376ms to configureAuth
	W0522 18:53:50.338027  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:53:50.338046  186216 retry.go:31] will retry after 31.98418587s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:54:22.323334  186216 provision.go:84] configureAuth start
	I0522 18:54:22.323432  186216 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	I0522 18:54:22.339673  186216 provision.go:87] duration metric: took 16.310238ms to configureAuth
	W0522 18:54:22.339696  186216 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:54:22.339711  186216 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:54:22.339719  186216 machine.go:97] duration metric: took 1m41.967061929s to provisionDockerMachine
	I0522 18:54:22.339776  186216 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:22.339806  186216 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m03
	I0522 18:54:22.355838  186216 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32922 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m03/id_rsa Username:docker}
	I0522 18:54:22.435569  186216 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:54:22.439484  186216 fix.go:56] duration metric: took 1m42.086661902s for fixHost
	I0522 18:54:22.439506  186216 start.go:83] releasing machines lock for "multinode-737786-m03", held for 1m42.086711234s
	W0522 18:54:22.439603  186216 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	* Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:54:22.441361  186216 out.go:177] 
	W0522 18:54:22.442369  186216 out.go:239] X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	X Exiting due to GUEST_NODE_PROVISION: provisioning host for node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:54:22.442380  186216 out.go:239] * 
	* 
	W0522 18:54:22.444827  186216 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_1c3a1297795327375b61f3ff5a4ef34c9b2fc69b_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:54:22.446177  186216 out.go:177] 

                                                
                                                
** /stderr **
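Every configureAuth attempt above fails the same way: minikube templates the node's IPv4/IPv6 pair out of `docker container inspect`, and because the restarted m03 container no longer has an entry under .NetworkSettings.Networks for its network, the template renders an empty string. Splitting that on the comma yields one (empty) field instead of the expected two, which is exactly the "container addresses should have 2 values, got 1 values: []" error. A minimal Go sketch of that parsing step, with a hypothetical parseContainerAddresses helper standing in for minikube's oci-driver code:

package main

import (
	"fmt"
	"strings"
)

// parseContainerAddresses is a hypothetical stand-in for the minikube check
// that emits "container addresses should have 2 values, got 1 values: []".
// Its input is the rendered output of:
//   docker container inspect -f "{{with (index .NetworkSettings.Networks "<net>")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" <container>
func parseContainerAddresses(templated string) (ipv4, ipv6 string, err error) {
	addrs := strings.Split(strings.TrimSpace(templated), ",")
	if len(addrs) != 2 {
		// For an empty render, strings.Split returns [""]: one value that
		// prints as [], exactly as in the log above.
		return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(addrs), addrs)
	}
	return addrs[0], addrs[1], nil
}

func main() {
	fmt.Println(parseContainerAddresses("192.168.67.4,")) // attached network: 2 fields, IPv6 empty
	fmt.Println(parseContainerAddresses(""))              // detached network: 1 field -> error
}

Because each retry re-runs the same inspect against a container whose network attachment never comes back, the loop can only run down the clock: every attempt completes in ~15ms and fails identically until ubuntu.go:189 gives up after roughly 1m42s.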
multinode_test.go:285: node start returned an error. args "out/minikube-linux-amd64 -p multinode-737786 node start m03 -v=7 --alsologtostderr": exit status 80
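The retry cadence in the failed `node start` (92µs, 109µs, 132µs, ... 28.6s, 31.9s) is a jittered exponential backoff: each failed attempt roughly doubles the wait until the retry budget is exhausted. A self-contained sketch of the pattern, using a hypothetical retryWithBackoff helper rather than minikube's actual retry package:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff mirrors the cadence in the log: each failure roughly
// doubles the wait, with jitter, until the overall budget is spent.
func retryWithBackoff(op func() error, initial, max, budget time.Duration) error {
	deadline := time.Now().Add(budget)
	wait := initial
	for {
		err := op()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("giving up: %w", err)
		}
		// Jitter the wait into [0.5x, 1.5x) so concurrent retries don't align.
		jittered := wait/2 + time.Duration(rand.Int63n(int64(wait)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		if wait < max {
			wait *= 2
		}
	}
}

func main() {
	errNoIP := errors.New("container addresses should have 2 values, got 1 values: []")
	_ = retryWithBackoff(func() error { return errNoIP }, 100*time.Microsecond, 30*time.Second, 2*time.Second)
}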
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (317.937466ms)
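Per `minikube status --help`, the exit code encodes the host, cluster, and Kubernetes checks as bits from right to left, so the exit status 7 seen here is consistent with all three bits being raised for the unreachable workers. A small decode under that reading, with illustrative flag names (not minikube's identifiers):

package main

import "fmt"

// Illustrative bit flags for minikube's documented status exit code
// (right to left: host, cluster/kubelet, kubernetes/apiserver).
const (
	hostNotRunning    = 1 << 0
	clusterNotRunning = 1 << 1
	k8sNotRunning     = 1 << 2
)

func main() {
	code := 7 // exit status from `minikube status` above
	fmt.Println("host down:", code&hostNotRunning != 0)
	fmt.Println("kubelet down:", code&clusterNotRunning != 0)
	fmt.Println("apiserver down:", code&k8sNotRunning != 0)
}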

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
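Both workers print `host: Error` / `kubelet: Nonexistent` because the status path trips over the same driver-IP lookup (see the E-lines in the stderr below, status.go:352). A reduced sketch of that mapping, with a hypothetical Status struct in place of minikube's own types:

package main

import (
	"errors"
	"fmt"
)

// Status mirrors the fields printed in the stdout block above; the type
// and the getIP stub are illustrative, not minikube's actual API.
type Status struct {
	Name, Host, Kubelet string
}

func nodeStatus(name string, getIP func() (string, error)) Status {
	if _, err := getIP(); err != nil {
		// A driver-IP failure is reported as Host:Error with the
		// kubelet marked Nonexistent, as in the log below.
		return Status{Name: name, Host: "Error", Kubelet: "Nonexistent"}
	}
	return Status{Name: name, Host: "Running", Kubelet: "Running"}
}

func main() {
	noIP := func() (string, error) {
		return "", errors.New("container addresses should have 2 values, got 1 values: []")
	}
	fmt.Printf("%+v\n", nodeStatus("multinode-737786-m02", noIP))
}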
** stderr ** 
	I0522 18:54:22.491909  188104 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:22.492155  188104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:22.492163  188104 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:22.492167  188104 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:22.492322  188104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:22.492462  188104 out.go:298] Setting JSON to false
	I0522 18:54:22.492487  188104 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:22.492580  188104 notify.go:220] Checking for updates...
	I0522 18:54:22.492849  188104 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:22.492865  188104 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:22.493303  188104 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:22.510249  188104 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:22.510284  188104 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:22.510589  188104 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:22.525495  188104 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:22.525696  188104 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:22.525736  188104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:22.540664  188104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:22.619943  188104 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:22.623456  188104 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:22.633149  188104 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:22.678204  188104 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:22.669787757 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:22.678716  188104 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:22.678741  188104 api_server.go:166] Checking apiserver status ...
	I0522 18:54:22.678775  188104 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:22.689050  188104 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:22.696963  188104 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:22.697022  188104 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:22.704067  188104 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:22.704090  188104 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:22.707490  188104 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:22.707509  188104 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:22.707518  188104 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:22.707532  188104 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:22.707745  188104 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:22.723483  188104 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:22.723504  188104 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:22.723762  188104 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:22.738135  188104 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:22.738166  188104 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:22.738178  188104 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:22.738184  188104 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:22.738426  188104 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:22.752699  188104 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:22.752732  188104 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:22.752955  188104 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:22.768692  188104 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:22.768711  188104 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:22.768724  188104 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
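The failure above is mechanical rather than a flake: the docker container inspect template wraps the address lookup in {{with (index .NetworkSettings.Networks "<name>")}}, so when the container is no longer attached to the named network the template emits an empty string. Splitting that empty string on "," yields a single empty element, which Go prints as [], matching "got 1 values: []". A minimal Go sketch of that presumed failure mode (an assumption about the cause, not minikube's actual source):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// What the inspect template yields when the network key is missing:
		out := ""
		addrs := strings.Split(out, ",")
		// len(addrs) == 1, and a slice holding one empty string prints as [],
		// reproducing: "container addresses should have 2 values, got 1 values: []"
		fmt.Printf("container addresses should have 2 values, got %d values: %v\n",
			len(addrs), addrs)
	}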
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (318.116045ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:54:23.442422  188247 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:23.442693  188247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:23.442702  188247 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:23.442707  188247 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:23.442867  188247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:23.443010  188247 out.go:298] Setting JSON to false
	I0522 18:54:23.443037  188247 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:23.443154  188247 notify.go:220] Checking for updates...
	I0522 18:54:23.443490  188247 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:23.443512  188247 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:23.443997  188247 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:23.460495  188247 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:23.460517  188247 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:23.460736  188247 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:23.475503  188247 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:23.475705  188247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:23.475747  188247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:23.491140  188247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:23.571767  188247 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:23.575328  188247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:23.585349  188247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:23.629326  188247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:23.620506138 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:23.630025  188247 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:23.630055  188247 api_server.go:166] Checking apiserver status ...
	I0522 18:54:23.630083  188247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:23.640265  188247 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:23.648367  188247 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:23.648418  188247 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:23.655900  188247 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:23.655924  188247 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:23.660229  188247 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:23.660247  188247 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:23.660257  188247 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:23.660275  188247 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:23.660542  188247 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:23.676350  188247 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:23.676370  188247 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:23.676590  188247 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:23.691963  188247 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:23.691992  188247 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:23.692006  188247 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:23.692011  188247 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:23.692226  188247 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:23.707754  188247 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:23.707776  188247 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:23.708021  188247 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:23.722181  188247 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:23.722200  188247 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:23.722213  188247 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
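Each retry starts from the same host-status probe shown in the cli_runner lines: docker container inspect <name> --format={{.State.Status}}. A hedged Go sketch of that single step, shelling out to the Docker CLI the same way (the helper name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostStatus returns the raw container state ("running", "exited", ...).
	func hostStatus(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			container, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		s, err := hostStatus("multinode-737786-m02")
		fmt.Println(s, err) // "running <nil>" corresponds to host status = "Running"
	}

Note that "running" here only means the container exists and is up; as the log shows, a node can report host status "Running" while its network attachment, and hence its IP lookup, is broken.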
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (318.383124ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:54:25.478123  188416 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:25.478350  188416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:25.478359  188416 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:25.478362  188416 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:25.478552  188416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:25.478705  188416 out.go:298] Setting JSON to false
	I0522 18:54:25.478730  188416 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:25.478782  188416 notify.go:220] Checking for updates...
	I0522 18:54:25.479026  188416 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:25.479041  188416 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:25.479476  188416 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:25.496461  188416 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:25.496487  188416 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:25.496707  188416 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:25.511575  188416 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:25.511826  188416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:25.511861  188416 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:25.527072  188416 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:25.607875  188416 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:25.611406  188416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:25.620942  188416 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:25.665302  188416 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:25.656860893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:25.665771  188416 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:25.665795  188416 api_server.go:166] Checking apiserver status ...
	I0522 18:54:25.665831  188416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:25.676498  188416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:25.684598  188416 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:25.684640  188416 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:25.691720  188416 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:25.691741  188416 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:25.695181  188416 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:25.695199  188416 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:25.695208  188416 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:25.695221  188416 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:25.695471  188416 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:25.710942  188416 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:25.710961  188416 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:25.711156  188416 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:25.727676  188416 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:25.727695  188416 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:25.727707  188416 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:25.727713  188416 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:25.727929  188416 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:25.742993  188416 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:25.743013  188416 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:25.743220  188416 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:25.757992  188416 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:25.758011  188416 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:25.758025  188416 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
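For the control-plane node the probe goes three steps further: pgrep locates the kube-apiserver PID, the freezer entry in /proc/<pid>/cgroup is read to confirm the pod is THAWED rather than paused, and /healthz is fetched over HTTPS. A sketch of the final hop only (illustrative: it skips certificate verification, while minikube validates against the cluster CA from the kubeconfig):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// For illustration only; a real probe should verify the server cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expected, as in the log: 200 ok
	}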
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (321.49349ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:54:28.327031  188553 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:28.327303  188553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:28.327316  188553 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:28.327322  188553 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:28.327493  188553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:28.327636  188553 out.go:298] Setting JSON to false
	I0522 18:54:28.327659  188553 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:28.327764  188553 notify.go:220] Checking for updates...
	I0522 18:54:28.327982  188553 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:28.328006  188553 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:28.328361  188553 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:28.348639  188553 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:28.348661  188553 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:28.348848  188553 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:28.364663  188553 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:28.364873  188553 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:28.364913  188553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:28.380189  188553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:28.459890  188553 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:28.463646  188553 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:28.473449  188553 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:28.518168  188553 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:28.509605897 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:28.518690  188553 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:28.518718  188553 api_server.go:166] Checking apiserver status ...
	I0522 18:54:28.518756  188553 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:28.528942  188553 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:28.537022  188553 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:28.537075  188553 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:28.544168  188553 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:28.544186  188553 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:28.547722  188553 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:28.547741  188553 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:28.547753  188553 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:28.547788  188553 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:28.548021  188553 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:28.563913  188553 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:28.563931  188553 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:28.564166  188553 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:28.578958  188553 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:28.578978  188553 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:28.578990  188553 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:28.578997  188553 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:28.579202  188553 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:28.593995  188553 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:28.594014  188553 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:28.594230  188553 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:28.609612  188553 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:28.609632  188553 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:28.609645  188553 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
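The "new ssh client: &{IP:127.0.0.1 Port:32897 ...}" lines come from a second inspect template: the published host port for 22/tcp is read, and SSH dials 127.0.0.1 on that port instead of the container IP. A sketch of that lookup (function name is illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort reads the host port Docker published for the container's sshd.
	func sshHostPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("multinode-737786")
		fmt.Println(port, err) // "32897 <nil>" in the runs above
	}

Dialing the published port on 127.0.0.1 is why the primary node stays reachable here even though lookups on the user-defined network fail for the worker containers.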
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (322.818081ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:54:31.485425  188716 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:31.485676  188716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:31.485687  188716 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:31.485693  188716 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:31.485866  188716 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:31.486028  188716 out.go:298] Setting JSON to false
	I0522 18:54:31.486056  188716 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:31.486153  188716 notify.go:220] Checking for updates...
	I0522 18:54:31.486418  188716 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:31.486435  188716 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:31.486817  188716 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:31.503338  188716 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:31.503357  188716 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:31.503554  188716 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:31.519567  188716 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:31.519777  188716 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:31.519810  188716 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:31.534274  188716 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:31.615904  188716 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:31.619785  188716 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:31.629429  188716 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:31.677233  188716 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:31.668311043 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:31.677762  188716 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:31.677790  188716 api_server.go:166] Checking apiserver status ...
	I0522 18:54:31.677830  188716 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:31.688436  188716 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:31.696660  188716 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:31.696719  188716 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:31.704101  188716 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:31.704128  188716 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:31.707635  188716 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:31.707653  188716 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:31.707662  188716 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:31.707676  188716 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:31.707918  188716 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:31.724145  188716 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:31.724166  188716 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:31.724412  188716 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:31.739814  188716 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:31.739842  188716 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:31.739862  188716 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:31.739868  188716 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:31.740163  188716 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:31.754818  188716 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:31.754837  188716 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:31.755062  188716 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:31.769425  188716 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:31.769445  188716 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:31.769458  188716 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
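The sh -c "df -h /var | awk 'NR==2{print $5}'" step reports the Use% of the filesystem backing /var: NR==2 selects the data row under df's header, and $5 is its fifth column. minikube runs it over SSH inside the node; run locally, an equivalent looks like this sketch:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Second line of df output, fifth column: the Use% value, e.g. "23%".
		out, err := exec.Command("sh", "-c",
			`df -h /var | awk 'NR==2{print $5}'`).Output()
		fmt.Printf("%s %v\n", out, err)
	}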
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (330.616955ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:54:35.011111  188873 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:35.011386  188873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:35.011396  188873 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:35.011400  188873 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:35.011589  188873 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:35.011737  188873 out.go:298] Setting JSON to false
	I0522 18:54:35.011761  188873 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:35.011881  188873 notify.go:220] Checking for updates...
	I0522 18:54:35.012205  188873 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:35.012223  188873 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:35.012678  188873 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:35.030816  188873 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:35.030839  188873 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:35.031154  188873 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:35.046666  188873 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:35.046876  188873 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:35.046915  188873 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:35.062178  188873 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:35.148146  188873 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:35.152493  188873 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:35.162237  188873 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:35.210430  188873 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:35.201229159 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:35.210976  188873 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:35.211008  188873 api_server.go:166] Checking apiserver status ...
	I0522 18:54:35.211035  188873 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:35.221365  188873 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:35.229506  188873 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:35.229578  188873 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:35.236918  188873 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:35.236939  188873 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:35.241174  188873 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:35.241194  188873 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:35.241204  188873 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:35.241234  188873 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:35.241477  188873 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:35.257974  188873 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:35.257994  188873 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:35.258241  188873 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:35.273573  188873 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:35.273600  188873 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:35.273616  188873 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:35.273630  188873 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:35.273854  188873 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:35.288419  188873 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:35.288439  188873 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:35.288668  188873 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:35.303324  188873 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:35.303346  188873 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:35.303365  188873 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
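The kubelet check needs only an exit code: systemctl is-active --quiet prints nothing and exits 0 when the unit is active. (The logged invocation passes "service kubelet" as two arguments, so systemctl evaluates both names; the sketch below checks the kubelet unit alone.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Run() returns nil exactly when systemctl exits 0, i.e. the unit is active.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		fmt.Println("kubelet active:", err == nil)
	}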
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (324.514348ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:54:42.928641  189058 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:42.928777  189058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:42.928790  189058 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:42.928796  189058 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:42.929012  189058 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:42.929169  189058 out.go:298] Setting JSON to false
	I0522 18:54:42.929194  189058 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:42.929312  189058 notify.go:220] Checking for updates...
	I0522 18:54:42.929656  189058 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:42.929676  189058 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:42.930154  189058 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:42.946913  189058 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:42.946944  189058 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:42.947238  189058 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:42.962501  189058 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:42.962762  189058 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:42.962822  189058 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:42.978765  189058 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:43.060079  189058 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:43.063932  189058 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:43.073573  189058 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:43.118849  189058 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:43.10959673 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:43.119405  189058 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:43.119434  189058 api_server.go:166] Checking apiserver status ...
	I0522 18:54:43.119463  189058 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:43.129944  189058 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:43.138545  189058 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:43.138595  189058 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:43.146190  189058 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:43.146215  189058 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:43.151100  189058 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:43.151119  189058 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:43.151129  189058 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:43.151143  189058 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:43.151398  189058 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:43.168003  189058 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:43.168020  189058 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:43.168241  189058 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:43.183485  189058 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:43.183522  189058 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:43.183547  189058 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:43.183556  189058 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:43.183844  189058 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:43.199638  189058 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:43.199658  189058 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:43.199916  189058 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:43.214753  189058 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:43.214780  189058 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:43.214793  189058 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
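Every worker-node failure in the stderr above is the same parse error from the IP lookup in status.go. Below is a minimal sketch of that pattern, assuming a simplified stand-in for minikube's actual source (the helper name containerIPs is hypothetical; the inspect template is copied verbatim from the log): render "IPv4,IPv6" for the container's entry on the named network, split the output on the comma, and require exactly two values. A container that is running but no longer attached to that network renders an empty string, which splits into a single empty element, hence the logged "got 1 values: []".

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIPs (hypothetical name) mirrors the inspect-and-split pattern
	// visible in the log: ask Docker for "IPv4,IPv6" of the container on the
	// named network and require exactly two comma-separated values back.
	func containerIPs(container, network string) (string, string, error) {
		format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", "", err
		}
		ips := strings.Split(strings.TrimSpace(string(out)), ",")
		if len(ips) != 2 {
			// A detached container renders "", which splits to []string{""}:
			// length 1, printed as "[]", matching the error in the log.
			return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(ips), ips)
		}
		return ips[0], ips[1], nil
	}

	func main() {
		ip4, ip6, err := containerIPs("multinode-737786-m02", "multinode-737786-m02")
		fmt.Println(ip4, ip6, err)
	}
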
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (326.632865ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:54:59.051026  189284 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:54:59.051306  189284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:59.051316  189284 out.go:304] Setting ErrFile to fd 2...
	I0522 18:54:59.051321  189284 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:54:59.051478  189284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:54:59.051644  189284 out.go:298] Setting JSON to false
	I0522 18:54:59.051668  189284 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:54:59.051705  189284 notify.go:220] Checking for updates...
	I0522 18:54:59.051960  189284 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:54:59.051973  189284 status.go:255] checking status of multinode-737786 ...
	I0522 18:54:59.052315  189284 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:54:59.069116  189284 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:54:59.069156  189284 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:59.069481  189284 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:54:59.085221  189284 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:54:59.085417  189284 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:54:59.085467  189284 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:54:59.100854  189284 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:54:59.184035  189284 ssh_runner.go:195] Run: systemctl --version
	I0522 18:54:59.187854  189284 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:54:59.198104  189284 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:54:59.245841  189284 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:54:59.236611616 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:54:59.246366  189284 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:54:59.246393  189284 api_server.go:166] Checking apiserver status ...
	I0522 18:54:59.246420  189284 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:54:59.256684  189284 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:54:59.264933  189284 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:54:59.264998  189284 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:54:59.272261  189284 api_server.go:204] freezer state: "THAWED"
	I0522 18:54:59.272299  189284 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:54:59.276411  189284 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:54:59.276430  189284 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:54:59.276439  189284 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:54:59.276453  189284 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:54:59.276660  189284 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:54:59.293348  189284 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:54:59.293367  189284 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:54:59.293596  189284 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:54:59.308875  189284 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:59.308906  189284 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:59.308918  189284 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:59.308924  189284 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:54:59.309155  189284 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:54:59.323761  189284 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:54:59.323781  189284 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:54:59.323996  189284 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:54:59.338703  189284 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:54:59.338721  189284 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:54:59.338734  189284 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
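Each of these status runs exits with status 7, as the retry below does again. Per minikube's documented exit-code encoding for "minikube status" (component states set as bits from least significant: host, cluster, kubernetes, each bit set when that component is not OK), 7 = 1 + 2 + 4 reports all three unhealthy for at least one node. A small decoding sketch, assuming that documented encoding:

	package main

	import "fmt"

	func main() {
		code := 7 // exit status from "minikube status" in this report
		fmt.Println("host NOK:      ", code&1 != 0) // bit 0
		fmt.Println("cluster NOK:   ", code&2 != 0) // bit 1
		fmt.Println("kubernetes NOK:", code&4 != 0) // bit 2
	}
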
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr: exit status 7 (330.664692ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:55:15.011565  189517 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:55:15.011818  189517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:15.011826  189517 out.go:304] Setting ErrFile to fd 2...
	I0522 18:55:15.011831  189517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:15.012029  189517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:55:15.012198  189517 out.go:298] Setting JSON to false
	I0522 18:55:15.012223  189517 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:55:15.012315  189517 notify.go:220] Checking for updates...
	I0522 18:55:15.012528  189517 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:15.012542  189517 status.go:255] checking status of multinode-737786 ...
	I0522 18:55:15.012953  189517 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:15.031753  189517 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:55:15.031797  189517 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:15.032061  189517 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:15.047819  189517 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:15.048036  189517 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:55:15.048084  189517 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:15.063435  189517 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:15.144191  189517 ssh_runner.go:195] Run: systemctl --version
	I0522 18:55:15.148136  189517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:55:15.158990  189517 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:55:15.208702  189517 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:55:15.200074298 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:55:15.209187  189517 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:55:15.209212  189517 api_server.go:166] Checking apiserver status ...
	I0522 18:55:15.209239  189517 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:55:15.219615  189517 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:55:15.227705  189517 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:55:15.227753  189517 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:55:15.234954  189517 api_server.go:204] freezer state: "THAWED"
	I0522 18:55:15.234977  189517 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:55:15.239242  189517 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:55:15.239260  189517 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:55:15.239302  189517 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:55:15.239331  189517 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:55:15.239536  189517 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:15.255506  189517 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:55:15.255528  189517 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:55:15.255763  189517 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:55:15.271126  189517 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:55:15.271155  189517 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:55:15.271167  189517 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:55:15.271173  189517 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:55:15.271432  189517 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:55:15.286512  189517 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:55:15.286534  189517 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:55:15.286794  189517 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:55:15.302683  189517 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:55:15.302710  189517 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:55:15.302733  189517 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []

                                                
                                                
** /stderr **
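In every run above, the primary node passes the same three probes: find the kube-apiserver PID, confirm its freezer cgroup is THAWED, then GET /healthz. The sketch below is an assumed simplification of that sequence, not minikube's actual code; it has to run inside the node container as root on a cgroup v1 host, and it skips TLS verification purely for illustration where the real client would use the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"os"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		// Step 1: locate the apiserver process, as the log does via pgrep.
		out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err != nil {
			fmt.Println("apiserver process not found:", err)
			return
		}
		pid := strings.TrimSpace(string(out))

		// Step 2: resolve the freezer cgroup from /proc/<pid>/cgroup
		// (lines look like "13:freezer:/docker/<id>/kubepods/...").
		data, err := os.ReadFile("/proc/" + pid + "/cgroup")
		if err != nil {
			fmt.Println(err)
			return
		}
		for _, line := range strings.Split(string(data), "\n") {
			parts := strings.SplitN(line, ":", 3)
			if len(parts) == 3 && parts[1] == "freezer" {
				state, err := os.ReadFile("/sys/fs/cgroup/freezer" + parts[2] + "/freezer.state")
				if err != nil {
					fmt.Println(err)
					return
				}
				// "THAWED" in the log means the apiserver is runnable.
				fmt.Println("freezer state:", strings.TrimSpace(string(state)))
			}
		}

		// Step 3: hit the healthz endpoint; the log shows "returned 200: ok".
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err != nil {
			fmt.Println("healthz unreachable:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, strings.TrimSpace(string(body)))
	}
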
multinode_test.go:294: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-737786 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 161569,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:32:24.061487531Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0f033da40320ba3759bccac938ed954a52e8591012b592a9d459eac191ead142",
	            "SandboxKey": "/var/run/docker/netns/0f033da40320",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32897"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32896"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32893"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32895"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32894"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "0dc537a1f234204c25e41871b0c1dd246d8d646b8557cafc1f206a6312a58796",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
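The inspect above confirms the primary container is still attached to the "multinode-737786" network with IPAddress 192.168.67.2, and that host port 32897 maps to its SSH port 22, which is the tunnel the status runs used. The workers are not inspected here, but their status failures imply empty Networks maps; a quick check under that assumption (node container names copied from this report):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		for _, c := range []string{"multinode-737786", "multinode-737786-m02", "multinode-737786-m03"} {
			out, err := exec.Command("docker", "container", "inspect", "-f",
				"{{json .NetworkSettings.Networks}}", c).Output()
			if err != nil {
				fmt.Println(c, "inspect failed:", err)
				continue
			}
			// An attached node prints a map keyed by the network name; a
			// detached one presumably prints "{}", matching the empty IP
			// lookups in the status runs above.
			fmt.Println(c, strings.TrimSpace(string(out)))
		}
	}
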
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/StartAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StartAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/StartAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-737786 cp multinode-737786:/home/docker/cp-test.txt                           | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test_multinode-737786_multinode-737786-m03.txt     |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786 sudo cat                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m03 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786_multinode-737786-m03.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp testdata/cp-test.txt                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m02_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m03 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp testdata/cp-test.txt                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m03_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02:/home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m02 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-737786 node stop m03                                                          | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	| node    | multinode-737786 node start                                                             | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:32:18
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:32:18.820070  160939 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:32:18.820158  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820166  160939 out.go:304] Setting ErrFile to fd 2...
	I0522 18:32:18.820169  160939 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:32:18.820356  160939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:32:18.820906  160939 out.go:298] Setting JSON to false
	I0522 18:32:18.821847  160939 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":4483,"bootTime":1716398256,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:32:18.821903  160939 start.go:139] virtualization: kvm guest
	I0522 18:32:18.825068  160939 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:32:18.826450  160939 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:32:18.826451  160939 notify.go:220] Checking for updates...
	I0522 18:32:18.827917  160939 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:32:18.829159  160939 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:18.830471  160939 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:32:18.832039  160939 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:32:18.833509  160939 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:32:18.835235  160939 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:32:18.856978  160939 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:32:18.857075  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.904065  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.895172586 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.904163  160939 docker.go:295] overlay module found
	I0522 18:32:18.906205  160939 out.go:177] * Using the docker driver based on user configuration
	I0522 18:32:18.907716  160939 start.go:297] selected driver: docker
	I0522 18:32:18.907745  160939 start.go:901] validating driver "docker" against <nil>
	I0522 18:32:18.907759  160939 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:32:18.908486  160939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:32:18.953709  160939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:32:18.945190998 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:32:18.953883  160939 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 18:32:18.954091  160939 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:32:18.956247  160939 out.go:177] * Using Docker driver with root privileges
	I0522 18:32:18.957858  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:18.957878  160939 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0522 18:32:18.957886  160939 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0522 18:32:18.957966  160939 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuth
Sock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
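
	That config blob is persisted verbatim to the profile's config.json (see the "Saving config" line below), so it round-trips with plain encoding/json. A minimal sketch of reading a few fields back; the struct mirrors names visible in the dump and is not minikube's own type:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	// Minimal mirror of a few fields visible in the dump above;
	// encoding/json silently ignores the many fields not listed here.
	type kubernetesConfig struct {
		KubernetesVersion string
		ClusterName       string
	}

	type clusterConfig struct {
		Name             string
		Memory           int
		CPUs             int
		Driver           string
		KubernetesConfig kubernetesConfig
	}

	func main() {
		path := os.ExpandEnv("$HOME/.minikube/profiles/multinode-737786/config.json")
		b, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var cc clusterConfig
		if err := json.Unmarshal(b, &cc); err != nil {
			panic(err)
		}
		fmt.Printf("%s: Kubernetes %s on %s, %dMB / %d CPUs\n",
			cc.Name, cc.KubernetesConfig.KubernetesVersion, cc.Driver, cc.Memory, cc.CPUs)
	}
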
	I0522 18:32:18.959670  160939 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:32:18.961220  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:32:18.962715  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:32:18.964248  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:18.964293  160939 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:32:18.964303  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:32:18.964344  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:32:18.964398  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:32:18.964409  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:32:18.964718  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:18.964741  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json: {Name:mk43b46af9c3b0b30bdffa978db6463aacef7d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:18.980726  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:32:18.980763  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
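
	The "exists in daemon, skipping load" decision above amounts to asking the local daemon whether the pinned kicbase digest resolves. A rough shell-out equivalent (minikube itself uses the Docker client library, so this is illustrative only):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// imageInDaemon reports whether the local docker daemon already has ref,
	// so a pull can be skipped. `docker image inspect` exits non-zero when
	// the image is absent.
	func imageInDaemon(ref string) bool {
		return exec.Command("docker", "image", "inspect", ref).Run() == nil
	}

	func main() {
		ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"
		if imageInDaemon(ref) {
			fmt.Println("exists in daemon, skipping pull")
		} else {
			fmt.Println("not found locally, would pull")
		}
	}
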
	I0522 18:32:18.980786  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:32:18.980821  160939 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:32:18.980939  160939 start.go:364] duration metric: took 90.565µs to acquireMachinesLock for "multinode-737786"
	I0522 18:32:18.980970  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:18.981093  160939 start.go:125] createHost starting for "" (driver="docker")
	I0522 18:32:18.983462  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:32:18.983714  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:32:18.983748  160939 client.go:168] LocalClient.Create starting
	I0522 18:32:18.983834  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:32:18.983868  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983888  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.983948  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:32:18.983967  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:32:18.983980  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:32:18.984396  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0522 18:32:18.999077  160939 cli_runner.go:211] docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0522 18:32:18.999133  160939 network_create.go:281] running [docker network inspect multinode-737786] to gather additional debugging logs...
	I0522 18:32:18.999152  160939 cli_runner.go:164] Run: docker network inspect multinode-737786
	W0522 18:32:19.013736  160939 cli_runner.go:211] docker network inspect multinode-737786 returned with exit code 1
	I0522 18:32:19.013763  160939 network_create.go:284] error running [docker network inspect multinode-737786]: docker network inspect multinode-737786: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network multinode-737786 not found
	I0522 18:32:19.013789  160939 network_create.go:286] output of [docker network inspect multinode-737786]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network multinode-737786 not found
	
	** /stderr **
	I0522 18:32:19.013898  160939 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:19.029452  160939 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-638c6f0967c1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:c7:dc:4f:16} reservation:<nil>}
	I0522 18:32:19.029912  160939 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbcc438b661e IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:7a:35:35:2f} reservation:<nil>}
	I0522 18:32:19.030359  160939 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a34820}
	I0522 18:32:19.030382  160939 network_create.go:124] attempt to create docker network multinode-737786 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0522 18:32:19.030423  160939 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=multinode-737786 multinode-737786
	I0522 18:32:19.080955  160939 network_create.go:108] docker network multinode-737786 192.168.67.0/24 created
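
	The three network.go lines above show the subnet scan: candidate private /24s are tried in order (49, 58, 67, ...) and the first one not already claimed by a local bridge wins. A simplified sketch of that scan; the real reservation logic also inspects docker networks, so treat this as an approximation:

	package main

	import (
		"fmt"
		"net"
	)

	// firstFreeSubnet walks candidate /24s and returns the first whose range
	// is not already bound to a local interface (e.g. an existing br-* bridge).
	func firstFreeSubnet(candidates []string) (string, error) {
		addrs, err := net.InterfaceAddrs()
		if err != nil {
			return "", err
		}
		for _, cidr := range candidates {
			_, ipnet, err := net.ParseCIDR(cidr)
			if err != nil {
				return "", err
			}
			taken := false
			for _, a := range addrs {
				if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
					taken = true // some interface already lives in this subnet
					break
				}
			}
			if !taken {
				return cidr, nil
			}
		}
		return "", fmt.Errorf("no free subnet among %v", candidates)
	}

	func main() {
		free, err := firstFreeSubnet([]string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"})
		if err != nil {
			panic(err)
		}
		fmt.Println("using free private subnet", free) // 192.168.67.0/24 on this host
	}
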
	I0522 18:32:19.080984  160939 kic.go:121] calculated static IP "192.168.67.2" for the "multinode-737786" container
	I0522 18:32:19.081036  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:32:19.095483  160939 cli_runner.go:164] Run: docker volume create multinode-737786 --label name.minikube.sigs.k8s.io=multinode-737786 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:32:19.111371  160939 oci.go:103] Successfully created a docker volume multinode-737786
	I0522 18:32:19.111438  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --entrypoint /usr/bin/test -v multinode-737786:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:32:19.598377  160939 oci.go:107] Successfully prepared a docker volume multinode-737786
	I0522 18:32:19.598412  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:19.598430  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:32:19.598501  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:32:23.741449  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.142877958s)
	I0522 18:32:23.741484  160939 kic.go:203] duration metric: took 4.14304939s to extract preloaded images to volume ...
	W0522 18:32:23.741633  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:32:23.741756  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:32:23.786059  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786 --name multinode-737786 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786 --network multinode-737786 --ip 192.168.67.2 --volume multinode-737786:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:32:24.069142  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Running}}
	I0522 18:32:24.086344  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.103978  160939 cli_runner.go:164] Run: docker exec multinode-737786 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:32:24.141807  160939 oci.go:144] the created container "multinode-737786" has a running status.
	I0522 18:32:24.141842  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa...
	I0522 18:32:24.342469  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:32:24.342509  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:32:24.363722  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.383810  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:32:24.383841  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786 chown docker:docker /home/docker/.ssh/authorized_keys]
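
	The kic SSH bootstrap above generates an id_rsa pair on the host and copies the public half into /home/docker/.ssh/authorized_keys inside the container. A self-contained sketch of producing such a pair, assuming golang.org/x/crypto/ssh is available:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"encoding/pem"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Generate a keypair like the id_rsa/id_rsa.pub created above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Private key, PEM-encoded, written mode 0600.
		privPEM := pem.EncodeToMemory(&pem.Block{
			Type:  "RSA PRIVATE KEY",
			Bytes: x509.MarshalPKCS1PrivateKey(key),
		})
		if err := os.WriteFile("id_rsa", privPEM, 0600); err != nil {
			panic(err)
		}
		// Public half in authorized_keys format -- the payload that gets
		// copied to /home/docker/.ssh/authorized_keys in the container.
		pub, err := ssh.NewPublicKey(&key.PublicKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0644); err != nil {
			panic(err)
		}
	}
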
	I0522 18:32:24.455784  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:24.474782  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:32:24.474871  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.497547  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.497754  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.497767  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:32:24.698482  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.698509  160939 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:32:24.698565  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.715252  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.715478  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.715502  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:32:24.840636  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:32:24.840711  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:24.857900  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:24.858096  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:24.858117  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:32:24.967023  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:32:24.967068  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:32:24.967091  160939 ubuntu.go:177] setting up certificates
	I0522 18:32:24.967102  160939 provision.go:84] configureAuth start
	I0522 18:32:24.967154  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:24.983423  160939 provision.go:143] copyHostCerts
	I0522 18:32:24.983455  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983479  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:32:24.983485  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:32:24.983549  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:32:24.983615  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983633  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:32:24.983640  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:32:24.983665  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:32:24.983708  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983723  160939 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:32:24.983730  160939 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:32:24.983749  160939 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:32:24.983796  160939 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
	I0522 18:32:25.113895  160939 provision.go:177] copyRemoteCerts
	I0522 18:32:25.113964  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:32:25.113999  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.130480  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:25.215072  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:32:25.215123  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:32:25.235444  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:32:25.235498  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:32:25.255313  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:32:25.255360  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:32:25.275241  160939 provision.go:87] duration metric: took 308.123688ms to configureAuth
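
	configureAuth, timed above at ~308ms, boils down to minting a server certificate signed by the minikube CA with the SAN list from the "generating server cert" line. A compressed crypto/x509 sketch (the must helper is hypothetical, and the real run loads ca.pem/ca-key.pem from .minikube/certs rather than generating a CA):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func must(err error) {
		if err != nil {
			panic(err)
		}
	}

	func main() {
		// Stand-in CA; the run above loads an existing one from disk.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		must(err)
		caCert, err := x509.ParseCertificate(caDER)
		must(err)

		// Server certificate carrying exactly the SANs from the log line above.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		must(err)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.multinode-737786"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
			DNSNames:     []string{"localhost", "minikube", "multinode-737786"},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		must(err)
		must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER}), 0644))
	}
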
	I0522 18:32:25.275280  160939 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:32:25.275447  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:25.275493  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.291597  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.291797  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.291813  160939 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:32:25.403199  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:32:25.403222  160939 ubuntu.go:71] root file system type: overlay
	I0522 18:32:25.403368  160939 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:32:25.403417  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.419508  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.419684  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.419742  160939 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:32:25.540991  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:32:25.541068  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:25.556804  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:32:25.556997  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32897 <nil> <nil>}
	I0522 18:32:25.557016  160939 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:32:26.182116  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2024-05-08 13:59:39.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2024-05-22 18:32:25.538581939 +0000
	@@ -1,46 +1,49 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	-After=network-online.target docker.socket firewalld.service containerd.service time-set.target
	-Wants=network-online.target containerd.service
	+BindsTo=containerd.service
	+After=network-online.target firewalld.service containerd.service
	+Wants=network-online.target
	 Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	-Restart=always
	+Restart=on-failure
	 
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0522 18:32:26.182148  160939 machine.go:97] duration metric: took 1.707347407s to provisionDockerMachine
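
	The long SSH command above implements an idempotent unit update: render docker.service.new, diff it against the installed unit, and only when they differ move the new file into place, reload systemd, and enable/restart the service. The same pattern in Go, as a sketch (syncUnit is a hypothetical helper, not minikube code):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// syncUnit swaps in the rendered unit file, reloads systemd, and
	// restarts the service -- but only if the content actually changed.
	func syncUnit(path, service string, rendered []byte) error {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return nil // unchanged: skip the daemon-reload and restart entirely
		}
		if err := os.WriteFile(path+".new", rendered, 0644); err != nil {
			return err
		}
		if err := os.Rename(path+".new", path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"-f", "enable", service},
			{"-f", "restart", service},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // truncated for the sketch
		if err := syncUnit("/lib/systemd/system/docker.service", "docker", unit); err != nil {
			panic(err)
		}
	}
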
	I0522 18:32:26.182160  160939 client.go:171] duration metric: took 7.198404279s to LocalClient.Create
	I0522 18:32:26.182176  160939 start.go:167] duration metric: took 7.198463255s to libmachine.API.Create "multinode-737786"
	I0522 18:32:26.182182  160939 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:32:26.182195  160939 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:32:26.182267  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:32:26.182301  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.198446  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.283412  160939 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:32:26.286206  160939 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:32:26.286222  160939 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:32:26.286230  160939 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:32:26.286238  160939 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:32:26.286245  160939 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:32:26.286252  160939 command_runner.go:130] > ID=ubuntu
	I0522 18:32:26.286258  160939 command_runner.go:130] > ID_LIKE=debian
	I0522 18:32:26.286280  160939 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:32:26.286291  160939 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:32:26.286302  160939 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:32:26.286317  160939 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:32:26.286328  160939 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:32:26.286376  160939 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:32:26.286410  160939 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:32:26.286428  160939 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:32:26.286440  160939 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:32:26.286455  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:32:26.286505  160939 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:32:26.286590  160939 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:32:26.286602  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:32:26.286703  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:32:26.294122  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:26.314177  160939 start.go:296] duration metric: took 131.985031ms for postStartSetup
	I0522 18:32:26.314484  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.329734  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:32:26.329958  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:32:26.329996  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.344674  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.423242  160939 command_runner.go:130] > 27%!
	(MISSING)
	I0522 18:32:26.423479  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:32:26.427170  160939 command_runner.go:130] > 215G
	I0522 18:32:26.427358  160939 start.go:128] duration metric: took 7.446253482s to createHost
	I0522 18:32:26.427380  160939 start.go:83] releasing machines lock for "multinode-737786", held for 7.446425308s
	I0522 18:32:26.427450  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:32:26.442825  160939 ssh_runner.go:195] Run: cat /version.json
	I0522 18:32:26.442867  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.442937  160939 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:32:26.443009  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:26.459148  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.459626  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:26.615027  160939 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:32:26.615123  160939 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:32:26.615168  160939 ssh_runner.go:195] Run: systemctl --version
	I0522 18:32:26.618922  160939 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:32:26.618954  160939 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:32:26.619096  160939 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:32:26.622539  160939 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:32:26.622555  160939 command_runner.go:130] >   Size: 54        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:32:26.622561  160939 command_runner.go:130] > Device: 37h/55d	Inode: 803930      Links: 1
	I0522 18:32:26.622567  160939 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:26.622576  160939 command_runner.go:130] > Access: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622584  160939 command_runner.go:130] > Modify: 2023-06-14 14:44:50.000000000 +0000
	I0522 18:32:26.622592  160939 command_runner.go:130] > Change: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622604  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:12.885782987 +0000
	I0522 18:32:26.622753  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:32:26.643532  160939 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:32:26.643591  160939 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:32:26.666889  160939 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf, 
	I0522 18:32:26.666926  160939 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
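
	The find/sed pass above injects a "name" key into the loopback CNI config and pins cniVersion to 1.0.0, while the second pass renames the bridge/podman configs aside (*.mk_disabled). Assuming the stock 54-byte loopback file seen in the stat output, the patched /etc/cni/net.d/200-loopback.conf plausibly ends up as:

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}
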
	I0522 18:32:26.666940  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.666967  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.667076  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.679769  160939 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:32:26.680589  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:32:26.688804  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:32:26.696790  160939 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:32:26.696843  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:32:26.705063  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.713131  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:32:26.721185  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:32:26.729165  160939 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:32:26.736590  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:32:26.744755  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:32:26.752531  160939 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:32:26.760599  160939 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:32:26.767562  160939 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:32:26.767615  160939 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:32:26.774559  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:26.839033  160939 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0522 18:32:26.926529  160939 start.go:494] detecting cgroup driver to use...
	I0522 18:32:26.926582  160939 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:32:26.926653  160939 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:32:26.936733  160939 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:32:26.936821  160939 command_runner.go:130] > [Unit]
	I0522 18:32:26.936842  160939 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:32:26.936853  160939 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:32:26.936864  160939 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:32:26.936876  160939 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:32:26.936886  160939 command_runner.go:130] > Wants=network-online.target
	I0522 18:32:26.936894  160939 command_runner.go:130] > Requires=docker.socket
	I0522 18:32:26.936904  160939 command_runner.go:130] > StartLimitBurst=3
	I0522 18:32:26.936910  160939 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:32:26.936921  160939 command_runner.go:130] > [Service]
	I0522 18:32:26.936928  160939 command_runner.go:130] > Type=notify
	I0522 18:32:26.936937  160939 command_runner.go:130] > Restart=on-failure
	I0522 18:32:26.936949  160939 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:32:26.936965  160939 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:32:26.936979  160939 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:32:26.936992  160939 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:32:26.937014  160939 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:32:26.937027  160939 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:32:26.937042  160939 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:32:26.937058  160939 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:32:26.937072  160939 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:32:26.937081  160939 command_runner.go:130] > ExecStart=
	I0522 18:32:26.937105  160939 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:32:26.937116  160939 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:32:26.937132  160939 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:32:26.937143  160939 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:32:26.937151  160939 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:32:26.937158  160939 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:32:26.937167  160939 command_runner.go:130] > LimitCORE=infinity
	I0522 18:32:26.937177  160939 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:32:26.937188  160939 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:32:26.937197  160939 command_runner.go:130] > TasksMax=infinity
	I0522 18:32:26.937203  160939 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:32:26.937216  160939 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:32:26.937224  160939 command_runner.go:130] > Delegate=yes
	I0522 18:32:26.937234  160939 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:32:26.937243  160939 command_runner.go:130] > KillMode=process
	I0522 18:32:26.937253  160939 command_runner.go:130] > [Install]
	I0522 18:32:26.937263  160939 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:32:26.937834  160939 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:32:26.937891  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:32:26.948358  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:32:26.963466  160939 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:32:26.963527  160939 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:32:26.966525  160939 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:32:26.966635  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:32:26.974160  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:32:26.991240  160939 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:32:27.087184  160939 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:32:27.183939  160939 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:32:27.184074  160939 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:32:27.199707  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.274364  160939 ssh_runner.go:195] Run: sudo systemctl restart docker
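
	The 130-byte daemon.json pushed above, followed by the daemon-reload and docker restart, is what actually pins the docker cgroup driver. Its exact contents are not printed in this log; a daemon.json that selects cgroupfs would minimally be:

	{
	    "exec-opts": ["native.cgroupdriver=cgroupfs"]
	}
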
	I0522 18:32:27.497339  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:32:27.508050  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.517912  160939 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:32:27.594604  160939 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:32:27.603789  160939 command_runner.go:130] ! Created symlink /etc/systemd/system/sockets.target.wants/cri-docker.socket → /lib/systemd/system/cri-docker.socket.
	I0522 18:32:27.670370  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.738915  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:32:27.750303  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:32:27.759297  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:27.830818  160939 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:32:27.886665  160939 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:32:27.886752  160939 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:32:27.890680  160939 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:32:27.890703  160939 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:32:27.890711  160939 command_runner.go:130] > Device: 40h/64d	Inode: 258         Links: 1
	I0522 18:32:27.890720  160939 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:32:27.890729  160939 command_runner.go:130] > Access: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890736  160939 command_runner.go:130] > Modify: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890744  160939 command_runner.go:130] > Change: 2024-05-22 18:32:27.838748230 +0000
	I0522 18:32:27.890751  160939 command_runner.go:130] >  Birth: -
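
	"Will wait 60s for socket path" plus the stat call above is a poll-until-the-socket-exists loop. A sketch of that contract (the 500ms poll interval is assumed; minikube's retry helper differs in detail):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until path exists and is a unix socket, or the
	// deadline passes -- the same contract as the 60s wait logged above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("cri-dockerd socket is up")
	}
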
	I0522 18:32:27.890789  160939 start.go:562] Will wait 60s for crictl version
	I0522 18:32:27.890843  160939 ssh_runner.go:195] Run: which crictl
	I0522 18:32:27.893791  160939 command_runner.go:130] > /usr/bin/crictl
	I0522 18:32:27.893846  160939 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:32:27.922140  160939 command_runner.go:130] > Version:  0.1.0
	I0522 18:32:27.922160  160939 command_runner.go:130] > RuntimeName:  docker
	I0522 18:32:27.922164  160939 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:32:27.922170  160939 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:32:27.924081  160939 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:32:27.924147  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.943721  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.943794  160939 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:32:27.963666  160939 command_runner.go:130] > 26.1.2
	I0522 18:32:27.967758  160939 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:32:27.967841  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:32:27.982248  160939 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:32:27.985502  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:27.994876  160939 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Custom
QemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:32:27.994996  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:32:27.995038  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.010537  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.010570  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.010579  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.010586  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.010591  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.010596  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.010603  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.010611  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.011521  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.011540  160939 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:32:28.011593  160939 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:32:28.027292  160939 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:32:28.027322  160939 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:32:28.027331  160939 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:32:28.027336  160939 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:32:28.027341  160939 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:32:28.027345  160939 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:32:28.027350  160939 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:32:28.027355  160939 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:28.028262  160939 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0522 18:32:28.028281  160939 cache_images.go:84] Images are preloaded, skipping loading
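The duplicate docker images listings above are the preload check: the daemon's image list is compared against the image set expected for Kubernetes v1.30.1, and the preload tarball is only extracted when something is missing. A rough shell equivalent of that comparison (image names copied from the output above):

    #!/bin/bash
    # Report any expected image missing from the local Docker daemon.
    expected=(
      registry.k8s.io/kube-apiserver:v1.30.1
      registry.k8s.io/kube-scheduler:v1.30.1
      registry.k8s.io/kube-controller-manager:v1.30.1
      registry.k8s.io/kube-proxy:v1.30.1
      registry.k8s.io/etcd:3.5.12-0
      registry.k8s.io/coredns/coredns:v1.11.1
      registry.k8s.io/pause:3.9
      gcr.io/k8s-minikube/storage-provisioner:v5
    )
    have="$(docker images --format '{{.Repository}}:{{.Tag}}')"
    for img in "${expected[@]}"; do
      grep -qxF "$img" <<<"$have" || echo "missing: $img"
    done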
	I0522 18:32:28.028301  160939 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:32:28.028415  160939 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
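Note the empty ExecStart= line in the unit above: for a non-oneshot systemd service, a drop-in must clear the inherited ExecStart before assigning a new one, or systemd rejects the unit. A condensed sketch of installing such an override (paths from the log; the kubelet flags are abbreviated here):

    #!/bin/bash
    # Install a kubelet drop-in that replaces ExecStart, then reload.
    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --config=/var/lib/kubelet/config.yaml' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
    sudo systemctl daemon-reload    # re-read unit files
    sudo systemctl restart kubelet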
	I0522 18:32:28.028462  160939 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:32:28.069428  160939 command_runner.go:130] > cgroupfs
	I0522 18:32:28.070479  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:28.070498  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:28.070517  160939 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:32:28.070539  160939 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:32:28.070668  160939 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:32:28.070717  160939 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:32:28.078629  160939 command_runner.go:130] > kubeadm
	I0522 18:32:28.078645  160939 command_runner.go:130] > kubectl
	I0522 18:32:28.078649  160939 command_runner.go:130] > kubelet
	I0522 18:32:28.078672  160939 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:32:28.078732  160939 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:32:28.086243  160939 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:32:28.101448  160939 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:32:28.116571  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
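The 2158-byte payload copied above is the rendered kubeadm config shown earlier, staged as kubeadm.yaml.new and later fed to kubeadm init --config. When debugging a failed start it can be sanity-checked first; this assumes the kubeadm config validate subcommand, which recent kubeadm releases provide:

    #!/bin/bash
    # Sanity-check the staged kubeadm config before running init.
    KUBEADM=/var/lib/minikube/binaries/v1.30.1/kubeadm
    sudo "$KUBEADM" config validate --config /var/tmp/minikube/kubeadm.yaml.new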
	I0522 18:32:28.131251  160939 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:32:28.134083  160939 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:32:28.142915  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:28.220165  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:28.231892  160939 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:32:28.231919  160939 certs.go:194] generating shared ca certs ...
	I0522 18:32:28.231939  160939 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.232062  160939 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:32:28.232110  160939 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:32:28.232120  160939 certs.go:256] generating profile certs ...
	I0522 18:32:28.232166  160939 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:32:28.232179  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt with IP's: []
	I0522 18:32:28.429639  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt ...
	I0522 18:32:28.429667  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt: {Name:mkf8a2953d60a961d7574d013acfe3a49fa0bbfc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429820  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key ...
	I0522 18:32:28.429830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key: {Name:mk8a5d9e68b7e6e877768e7a2b460a40a5615658 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.429900  160939 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:32:28.429915  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.67.2]
	I0522 18:32:28.507177  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 ...
	I0522 18:32:28.507207  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43: {Name:mk09ce970fc623afc85e3fab7e404680e391a586 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507367  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 ...
	I0522 18:32:28.507382  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43: {Name:mkb137dcb8e57c549f50c85273becdd727997895 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.507489  160939 certs.go:381] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt
	I0522 18:32:28.507557  160939 certs.go:385] copying /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43 -> /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key
	I0522 18:32:28.507612  160939 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:32:28.507627  160939 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt with IP's: []
	I0522 18:32:28.617440  160939 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt ...
	I0522 18:32:28.617473  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt: {Name:mk54959ff23e2bad94a115faba59db15d7610b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617661  160939 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key ...
	I0522 18:32:28.617679  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key: {Name:mkd647f7d425cda8f2c79b7f52b5e4d12a0c0d05 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:28.617777  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:32:28.617797  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:32:28.617808  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:32:28.617823  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:32:28.617836  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:32:28.617848  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:32:28.617860  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:32:28.617873  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:32:28.617924  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:32:28.617957  160939 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:32:28.617967  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:32:28.617990  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:32:28.618019  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:32:28.618040  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:32:28.618075  160939 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:32:28.618102  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.618116  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.618128  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.618629  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:32:28.639518  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:32:28.659910  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:32:28.679937  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:32:28.699821  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:32:28.719536  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:32:28.739636  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:32:28.759509  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:32:28.779547  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:32:28.799365  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:32:28.819247  160939 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:32:28.839396  160939 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:32:28.854046  160939 ssh_runner.go:195] Run: openssl version
	I0522 18:32:28.858540  160939 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:32:28.858690  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:32:28.866551  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869507  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869532  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.869569  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:32:28.875214  160939 command_runner.go:130] > b5213941
	I0522 18:32:28.875413  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:32:28.883074  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:32:28.890531  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893535  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893557  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.893596  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:32:28.899083  160939 command_runner.go:130] > 51391683
	I0522 18:32:28.899310  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:32:28.906972  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:32:28.914876  160939 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917837  160939 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917865  160939 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.917909  160939 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:32:28.923606  160939 command_runner.go:130] > 3ec20f2e
	I0522 18:32:28.923823  160939 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
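The three openssl x509 -hash -noout calls above compute OpenSSL subject-name hashes; the <hash>.0 symlinks they feed are how OpenSSL's default verify path finds a CA under /etc/ssl/certs. The same trust-store install step, condensed (cert path from the log):

    #!/bin/bash
    # Install a CA where OpenSSL's default verify path can find it.
    cert=/usr/share/ca-certificates/minikubeCA.pem
    h="$(openssl x509 -hash -noout -in "$cert")"   # e.g. b5213941
    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"    # subject-hash link
    openssl verify -CApath /etc/ssl/certs "$cert"  # should now report OK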
	I0522 18:32:28.931516  160939 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:32:28.934218  160939 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934259  160939 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0522 18:32:28.934296  160939 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:32:28.934404  160939 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:32:28.950504  160939 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:32:28.958332  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0522 18:32:28.958356  160939 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0522 18:32:28.958365  160939 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
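The single ls over kubeadm-flags.env, config.yaml and /var/lib/minikube/etcd is the fresh-cluster probe: when none of them exist, kubeadm has never run on this node and a first-time init is chosen. The probe as a standalone sketch:

    #!/bin/bash
    # Decide between a fresh "kubeadm init" and reusing existing state.
    fresh=true
    for p in /var/lib/kubelet/kubeadm-flags.env \
             /var/lib/kubelet/config.yaml \
             /var/lib/minikube/etcd; do
      [ -e "$p" ] && fresh=false
    done
    $fresh && echo "no prior kubeadm state: first start" \
            || echo "existing state found: restart path"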
	I0522 18:32:28.958430  160939 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0522 18:32:28.966017  160939 kubeadm.go:213] ignoring SystemVerification for kubeadm because of docker driver
	I0522 18:32:28.966056  160939 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0522 18:32:28.973169  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0522 18:32:28.973191  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0522 18:32:28.973203  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0522 18:32:28.973217  160939 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973245  160939 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0522 18:32:28.973254  160939 kubeadm.go:156] found existing configuration files:
	
	I0522 18:32:28.973282  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0522 18:32:28.979661  160939 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980332  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0522 18:32:28.980367  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0522 18:32:28.987227  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0522 18:32:28.994428  160939 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994468  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0522 18:32:28.994505  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0522 18:32:29.001374  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.008562  160939 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008604  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0522 18:32:29.008648  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0522 18:32:29.015901  160939 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0522 18:32:29.023088  160939 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023130  160939 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0522 18:32:29.023170  160939 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
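Each of the four kubeconfigs above is grepped for the expected control-plane endpoint and deleted when the check fails (here they simply do not exist yet, hence the exit status 2 from grep). The cleanup loop, extracted:

    #!/bin/bash
    # Remove kubeconfigs that do not point at the expected endpoint.
    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done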
	I0522 18:32:29.030242  160939 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0522 18:32:29.069760  160939 kubeadm.go:309] [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069799  160939 command_runner.go:130] > [init] Using Kubernetes version: v1.30.1
	I0522 18:32:29.069836  160939 kubeadm.go:309] [preflight] Running pre-flight checks
	I0522 18:32:29.069844  160939 command_runner.go:130] > [preflight] Running pre-flight checks
	I0522 18:32:29.113834  160939 kubeadm.go:309] [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113865  160939 command_runner.go:130] > [preflight] The system verification failed. Printing the output from the verification:
	I0522 18:32:29.113960  160939 kubeadm.go:309] KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.113987  160939 command_runner.go:130] > KERNEL_VERSION: 5.15.0-1060-gcp
	I0522 18:32:29.114021  160939 kubeadm.go:309] OS: Linux
	I0522 18:32:29.114029  160939 command_runner.go:130] > OS: Linux
	I0522 18:32:29.114085  160939 kubeadm.go:309] CGROUPS_CPU: enabled
	I0522 18:32:29.114092  160939 command_runner.go:130] > CGROUPS_CPU: enabled
	I0522 18:32:29.114134  160939 kubeadm.go:309] CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114140  160939 command_runner.go:130] > CGROUPS_CPUACCT: enabled
	I0522 18:32:29.114177  160939 kubeadm.go:309] CGROUPS_CPUSET: enabled
	I0522 18:32:29.114183  160939 command_runner.go:130] > CGROUPS_CPUSET: enabled
	I0522 18:32:29.114230  160939 kubeadm.go:309] CGROUPS_DEVICES: enabled
	I0522 18:32:29.114237  160939 command_runner.go:130] > CGROUPS_DEVICES: enabled
	I0522 18:32:29.114278  160939 kubeadm.go:309] CGROUPS_FREEZER: enabled
	I0522 18:32:29.114285  160939 command_runner.go:130] > CGROUPS_FREEZER: enabled
	I0522 18:32:29.114324  160939 kubeadm.go:309] CGROUPS_MEMORY: enabled
	I0522 18:32:29.114331  160939 command_runner.go:130] > CGROUPS_MEMORY: enabled
	I0522 18:32:29.114373  160939 kubeadm.go:309] CGROUPS_PIDS: enabled
	I0522 18:32:29.114379  160939 command_runner.go:130] > CGROUPS_PIDS: enabled
	I0522 18:32:29.114421  160939 kubeadm.go:309] CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114428  160939 command_runner.go:130] > CGROUPS_HUGETLB: enabled
	I0522 18:32:29.114464  160939 kubeadm.go:309] CGROUPS_BLKIO: enabled
	I0522 18:32:29.114483  160939 command_runner.go:130] > CGROUPS_BLKIO: enabled
	I0522 18:32:29.173446  160939 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173485  160939 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0522 18:32:29.173623  160939 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173639  160939 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0522 18:32:29.173777  160939 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.173789  160939 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0522 18:32:29.376675  160939 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379640  160939 out.go:204]   - Generating certificates and keys ...
	I0522 18:32:29.376743  160939 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0522 18:32:29.379742  160939 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0522 18:32:29.379760  160939 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0522 18:32:29.379853  160939 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.379864  160939 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0522 18:32:29.571675  160939 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.571705  160939 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0522 18:32:29.667370  160939 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.667408  160939 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0522 18:32:29.730638  160939 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:29.730650  160939 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0522 18:32:30.114166  160939 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.114190  160939 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0522 18:32:30.185007  160939 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185032  160939 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0522 18:32:30.185157  160939 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.185169  160939 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376151  160939 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376188  160939 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0522 18:32:30.376347  160939 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.376364  160939 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-737786] and IPs [192.168.67.2 127.0.0.1 ::1]
	I0522 18:32:30.621621  160939 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.621651  160939 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0522 18:32:30.882886  160939 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.882922  160939 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0522 18:32:30.976851  160939 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0522 18:32:30.976877  160939 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0522 18:32:30.976927  160939 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:30.976932  160939 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0522 18:32:31.205083  160939 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.205126  160939 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0522 18:32:31.287749  160939 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.287812  160939 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0522 18:32:31.548360  160939 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.548390  160939 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0522 18:32:31.793952  160939 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.793983  160939 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0522 18:32:31.889475  160939 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.889508  160939 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0522 18:32:31.890099  160939 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.890122  160939 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0522 18:32:31.892764  160939 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895234  160939 out.go:204]   - Booting up control plane ...
	I0522 18:32:31.892832  160939 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0522 18:32:31.895375  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895388  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0522 18:32:31.895507  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895522  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0522 18:32:31.895605  160939 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.895619  160939 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0522 18:32:31.903936  160939 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.903958  160939 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0522 18:32:31.904721  160939 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904737  160939 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0522 18:32:31.904800  160939 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0522 18:32:31.904815  160939 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0522 18:32:31.989235  160939 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989268  160939 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0522 18:32:31.989364  160939 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:31.989377  160939 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0522 18:32:32.490313  160939 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490352  160939 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.202571ms
	I0522 18:32:32.490462  160939 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:32.490478  160939 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0522 18:32:36.991403  160939 kubeadm.go:309] [api-check] The API server is healthy after 4.501039406s
	I0522 18:32:36.991445  160939 command_runner.go:130] > [api-check] The API server is healthy after 4.501039406s
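kubeadm's api-check gate polls the apiserver health endpoint until it answers. The same probe can be reproduced by hand against this cluster's advertise address (address and port from the log; -k skips verification of the self-signed chain, and /livez is readable without credentials under default RBAC):

    #!/bin/bash
    # Manually probe the kube-apiserver health endpoint polled above.
    curl -k --max-time 5 https://192.168.67.2:8443/livez; echo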
	I0522 18:32:37.002153  160939 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.002184  160939 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0522 18:32:37.012503  160939 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.012532  160939 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0522 18:32:37.028436  160939 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028465  160939 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0522 18:32:37.028707  160939 kubeadm.go:309] [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.028725  160939 command_runner.go:130] > [mark-control-plane] Marking the node multinode-737786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0522 18:32:37.035001  160939 kubeadm.go:309] [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.035012  160939 command_runner.go:130] > [bootstrap-token] Using token: 941jnz.o7vwsajypu1e25vn
	I0522 18:32:37.036324  160939 out.go:204]   - Configuring RBAC rules ...
	I0522 18:32:37.036438  160939 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.036450  160939 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0522 18:32:37.039237  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.039252  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0522 18:32:37.044789  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.044808  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0522 18:32:37.047056  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.047074  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0522 18:32:37.049159  160939 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.049174  160939 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0522 18:32:37.051503  160939 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.051520  160939 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0522 18:32:37.397004  160939 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.397044  160939 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0522 18:32:37.813980  160939 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0522 18:32:37.814007  160939 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0522 18:32:38.397032  160939 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.397056  160939 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0522 18:32:38.398018  160939 kubeadm.go:309] 
	I0522 18:32:38.398101  160939 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398119  160939 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0522 18:32:38.398137  160939 kubeadm.go:309] 
	I0522 18:32:38.398211  160939 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398218  160939 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0522 18:32:38.398222  160939 kubeadm.go:309] 
	I0522 18:32:38.398246  160939 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0522 18:32:38.398255  160939 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0522 18:32:38.398337  160939 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398355  160939 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0522 18:32:38.398434  160939 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398443  160939 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0522 18:32:38.398453  160939 kubeadm.go:309] 
	I0522 18:32:38.398515  160939 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398522  160939 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0522 18:32:38.398529  160939 kubeadm.go:309] 
	I0522 18:32:38.398609  160939 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398618  160939 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0522 18:32:38.398622  160939 kubeadm.go:309] 
	I0522 18:32:38.398664  160939 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0522 18:32:38.398677  160939 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0522 18:32:38.398789  160939 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398800  160939 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0522 18:32:38.398863  160939 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398869  160939 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0522 18:32:38.398873  160939 kubeadm.go:309] 
	I0522 18:32:38.398944  160939 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.398950  160939 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0522 18:32:38.399022  160939 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0522 18:32:38.399032  160939 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0522 18:32:38.399037  160939 kubeadm.go:309] 
	I0522 18:32:38.399123  160939 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399130  160939 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399216  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399222  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e \
	I0522 18:32:38.399239  160939 kubeadm.go:309] 	--control-plane 
	I0522 18:32:38.399245  160939 command_runner.go:130] > 	--control-plane 
	I0522 18:32:38.399248  160939 kubeadm.go:309] 
	I0522 18:32:38.399370  160939 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399378  160939 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0522 18:32:38.399382  160939 kubeadm.go:309] 
	I0522 18:32:38.399476  160939 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399489  160939 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 941jnz.o7vwsajypu1e25vn \
	I0522 18:32:38.399636  160939 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
	I0522 18:32:38.399649  160939 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:570edbfe7b2478083ade58e1456273e7a23d9332ea9b48503bd1fbf2d9614c7e 
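The --discovery-token-ca-cert-hash value in the join commands pins the cluster CA for joining nodes; it is the SHA-256 of the CA certificate's DER-encoded public key and can be recomputed on the control plane with the recipe from the kubeadm docs (CA path from the log; assumes an RSA CA key, kubeadm's default):

    #!/bin/bash
    # Recompute the discovery-token-ca-cert-hash shown in the join command.
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'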
	I0522 18:32:38.401263  160939 kubeadm.go:309] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401277  160939 command_runner.go:130] ! 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1060-gcp\n", err: exit status 1
	I0522 18:32:38.401363  160939 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401380  160939 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0522 18:32:38.401398  160939 cni.go:84] Creating CNI manager for ""
	I0522 18:32:38.401406  160939 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0522 18:32:38.403405  160939 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0522 18:32:38.404599  160939 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0522 18:32:38.408100  160939 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0522 18:32:38.408121  160939 command_runner.go:130] >   Size: 4085020   	Blocks: 7992       IO Block: 4096   regular file
	I0522 18:32:38.408128  160939 command_runner.go:130] > Device: 37h/55d	Inode: 808770      Links: 1
	I0522 18:32:38.408133  160939 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:32:38.408141  160939 command_runner.go:130] > Access: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408145  160939 command_runner.go:130] > Modify: 2023-12-04 16:39:01.000000000 +0000
	I0522 18:32:38.408150  160939 command_runner.go:130] > Change: 2024-05-22 17:45:13.285811920 +0000
	I0522 18:32:38.408155  160939 command_runner.go:130] >  Birth: 2024-05-22 17:45:13.257809894 +0000
	I0522 18:32:38.408204  160939 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.1/kubectl ...
	I0522 18:32:38.408217  160939 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0522 18:32:38.424237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0522 18:32:38.586825  160939 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.590952  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0522 18:32:38.596051  160939 command_runner.go:130] > serviceaccount/kindnet created
	I0522 18:32:38.602929  160939 command_runner.go:130] > daemonset.apps/kindnet created
	I0522 18:32:38.606148  160939 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0522 18:32:38.606224  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.606247  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-737786 minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700 minikube.k8s.io/version=v1.33.1 minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9 minikube.k8s.io/name=multinode-737786 minikube.k8s.io/primary=true
	I0522 18:32:38.613527  160939 command_runner.go:130] > -16
	I0522 18:32:38.613563  160939 ops.go:34] apiserver oom_adj: -16
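The -16 read back from /proc/<pid>/oom_adj confirms the OOM-kill protection applied to the apiserver; the legacy oom_adj file is a coarser view of the modern oom_score_adj knob. A quick check of both (assumes a single kube-apiserver process on the node):

    #!/bin/bash
    # Inspect the OOM-kill protection on the running kube-apiserver.
    pid="$(pgrep -o kube-apiserver)"   # oldest matching pid
    cat "/proc/${pid}/oom_adj"         # legacy scale, -17..15
    cat "/proc/${pid}/oom_score_adj"   # modern scale, -1000..1000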
	I0522 18:32:38.671101  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0522 18:32:38.671199  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:38.679745  160939 command_runner.go:130] > node/multinode-737786 labeled
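At this point the minikube-rbac cluster-admin binding and the node's minikube.k8s.io/* labels are in place; both can be verified with plain kubectl once the apiserver answers (names from the log):

    #!/bin/bash
    # Verify the binding and node label created above.
    kubectl get clusterrolebinding minikube-rbac -o wide
    kubectl get node multinode-737786 \
      -o jsonpath='{.metadata.labels.minikube\.k8s\.io/version}{"\n"}'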
	I0522 18:32:38.773177  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.171792  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.232239  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:39.671894  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:39.732898  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.171368  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.228640  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:40.671860  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:40.732183  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.171401  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.231451  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:41.672085  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:41.732558  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.172181  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.230594  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:42.672237  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:42.733746  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.171306  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.233896  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:43.671416  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:43.730755  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.171408  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.231441  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:44.672067  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:44.729906  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.171343  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.231696  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:45.671243  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:45.732606  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.172238  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.229695  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:46.671885  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:46.731711  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.171960  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.228503  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:47.671939  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:47.733171  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.171805  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.230525  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:48.672280  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:48.731666  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.171973  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.230294  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:49.671915  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:49.733184  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.171393  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.230515  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:50.672155  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:50.732157  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.171406  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.266742  160939 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0522 18:32:51.671250  160939 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0522 18:32:51.747943  160939 command_runner.go:130] > NAME      SECRETS   AGE
	I0522 18:32:51.747967  160939 command_runner.go:130] > default   0         0s
	I0522 18:32:51.747991  160939 kubeadm.go:1107] duration metric: took 13.141832952s to wait for elevateKubeSystemPrivileges
	W0522 18:32:51.748021  160939 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0522 18:32:51.748034  160939 kubeadm.go:393] duration metric: took 22.813740637s to StartCluster
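
	The burst of "kubectl get sa default" retries above is minikube waiting for the controller-manager to provision the "default" ServiceAccount before it continues (the elevateKubeSystemPrivileges step timed at 13.14s). A minimal shell sketch of the same wait, using the binary and kubeconfig paths from this log (the retry interval is illustrative, not minikube's):

	  # Poll until the "default" ServiceAccount exists, as in the loop logged above.
	  until sudo /var/lib/minikube/binaries/v1.30.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done
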
	I0522 18:32:51.748054  160939 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.748131  160939 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.748830  160939 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:32:51.749052  160939 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:32:51.750591  160939 out.go:177] * Verifying Kubernetes components...
	I0522 18:32:51.749093  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0522 18:32:51.749107  160939 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:32:51.749382  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:32:51.752222  160939 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:32:51.752296  160939 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:32:51.752312  160939 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:32:51.752326  160939 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	I0522 18:32:51.752339  160939 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:32:51.752357  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.752681  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.752857  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.774832  160939 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:32:51.775039  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.776160  160939 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:51.776175  160939 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:32:51.776227  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.776423  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.776863  160939 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:32:51.776981  160939 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	I0522 18:32:51.777016  160939 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:32:51.777336  160939 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:32:51.795509  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:32:51.796953  160939 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:51.796975  160939 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:32:51.797025  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:32:51.814477  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
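
	Both addons follow the pattern visible above: the manifest bytes are copied to /etc/kubernetes/addons over the SSH tunnel on 127.0.0.1:32897, then applied with the cluster's own kubectl. A rough shell equivalent using the key, port, and paths from this log (minikube's ssh_runner additionally handles sudo piping and file permissions, which this sketch glosses over):

	  scp -i /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa \
	      -P 32897 storageclass.yaml docker@127.0.0.1:/tmp/storageclass.yaml
	  ssh -i /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa \
	      -p 32897 docker@127.0.0.1 \
	      "sudo mv /tmp/storageclass.yaml /etc/kubernetes/addons/storageclass.yaml && \
	       sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml"
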
	I0522 18:32:51.870824  160939 command_runner.go:130] > apiVersion: v1
	I0522 18:32:51.870847  160939 command_runner.go:130] > data:
	I0522 18:32:51.870853  160939 command_runner.go:130] >   Corefile: |
	I0522 18:32:51.870859  160939 command_runner.go:130] >     .:53 {
	I0522 18:32:51.870863  160939 command_runner.go:130] >         errors
	I0522 18:32:51.870869  160939 command_runner.go:130] >         health {
	I0522 18:32:51.870875  160939 command_runner.go:130] >            lameduck 5s
	I0522 18:32:51.870881  160939 command_runner.go:130] >         }
	I0522 18:32:51.870894  160939 command_runner.go:130] >         ready
	I0522 18:32:51.870908  160939 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0522 18:32:51.870919  160939 command_runner.go:130] >            pods insecure
	I0522 18:32:51.870929  160939 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0522 18:32:51.870939  160939 command_runner.go:130] >            ttl 30
	I0522 18:32:51.870946  160939 command_runner.go:130] >         }
	I0522 18:32:51.870957  160939 command_runner.go:130] >         prometheus :9153
	I0522 18:32:51.870967  160939 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0522 18:32:51.870977  160939 command_runner.go:130] >            max_concurrent 1000
	I0522 18:32:51.870983  160939 command_runner.go:130] >         }
	I0522 18:32:51.870993  160939 command_runner.go:130] >         cache 30
	I0522 18:32:51.871002  160939 command_runner.go:130] >         loop
	I0522 18:32:51.871009  160939 command_runner.go:130] >         reload
	I0522 18:32:51.871022  160939 command_runner.go:130] >         loadbalance
	I0522 18:32:51.871031  160939 command_runner.go:130] >     }
	I0522 18:32:51.871038  160939 command_runner.go:130] > kind: ConfigMap
	I0522 18:32:51.871047  160939 command_runner.go:130] > metadata:
	I0522 18:32:51.871058  160939 command_runner.go:130] >   creationTimestamp: "2024-05-22T18:32:37Z"
	I0522 18:32:51.871067  160939 command_runner.go:130] >   name: coredns
	I0522 18:32:51.871075  160939 command_runner.go:130] >   namespace: kube-system
	I0522 18:32:51.871086  160939 command_runner.go:130] >   resourceVersion: "229"
	I0522 18:32:51.871097  160939 command_runner.go:130] >   uid: d6517ddd-1175-4a40-a10d-60d1d382d7ae
	I0522 18:32:51.892382  160939 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:32:51.892495  160939 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.67.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
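
	The sed pipeline above splices two directives into the Corefile fetched a few lines earlier: a log directive before errors, and a hosts block before forward so pods can resolve host.minikube.internal to the host gateway. Assuming the replace succeeds (the "configmap/coredns replaced" line further down confirms it), the affected Corefile sections end up roughly as:

	          log
	          errors
	          ...
	          hosts {
	             192.168.67.1 host.minikube.internal
	             fallthrough
	          }
	          forward . /etc/resolv.conf {
	             max_concurrent 1000
	          }
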
	I0522 18:32:51.950050  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:51.950378  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:51.950733  160939 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.950852  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:51.950863  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.950877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.950889  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.959546  160939 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0522 18:32:51.959576  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.959584  160939 round_trippers.go:580]     Audit-Id: 5ddc21bd-b1b2-4ea2-81cf-c014c9a04f15
	I0522 18:32:51.959590  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.959595  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.959598  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.959602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.959606  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.959736  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:51.960668  160939 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:32:51.960761  160939 node_ready.go:38] duration metric: took 9.99326ms for node "multinode-737786" to be "Ready" ...
	I0522 18:32:51.960805  160939 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
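
	From here on the round_trippers lines are client-go request traces: minikube polls the API server directly rather than shelling out to kubectl. The two checks it performs (node Ready, then each system-critical pod Ready) can be approximated from a shell with the node name and one of the labels shown above; the 6m timeout mirrors the wait configured in this run:

	  kubectl get node multinode-737786 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
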
	I0522 18:32:51.960931  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:32:51.960963  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.960982  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.960996  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:51.964902  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:51.964929  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:51.964939  160939 round_trippers.go:580]     Audit-Id: 8b3d34ee-cdb3-49cd-991b-94f61024f9e2
	I0522 18:32:51.964945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:51.964952  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:51.964972  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:51.964977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:51.964987  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:51 GMT
	I0522 18:32:51.965722  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"354"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 59005 chars]
	I0522 18:32:51.970917  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	I0522 18:32:51.971068  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:51.971109  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:51.971130  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:51.971146  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.043914  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:32:52.045304  160939 round_trippers.go:574] Response Status: 200 OK in 74 milliseconds
	I0522 18:32:52.045329  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.045339  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.045343  160939 round_trippers.go:580]     Audit-Id: bed69948-0150-43f6-8c9c-dfd39f8a81e4
	I0522 18:32:52.045349  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.045354  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.045361  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.045365  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.046685  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.047307  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.047329  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.047339  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.047344  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.049383  160939 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:32:52.051476  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.051500  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.051510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.051516  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.051520  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.051524  160939 round_trippers.go:580]     Audit-Id: 2d50dfec-8764-4cd8-92b8-99f40ba4532d
	I0522 18:32:52.051530  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.051543  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.051659  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.471981  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.472002  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.472013  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.472019  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.547388  160939 round_trippers.go:574] Response Status: 200 OK in 75 milliseconds
	I0522 18:32:52.547416  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.547425  160939 round_trippers.go:580]     Audit-Id: 3eb91eea-1138-4663-bd0b-d4f080c3a1ee
	I0522 18:32:52.547430  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.547435  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.547439  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.547457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.547463  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.547916  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"352","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6107 chars]
	I0522 18:32:52.548699  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.548751  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.548782  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.548796  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.554135  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.554200  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.554224  160939 round_trippers.go:580]     Audit-Id: c62627b8-a513-4303-8697-a7fe1f12763e
	I0522 18:32:52.554239  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.554272  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.554291  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.554304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.554318  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.554527  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:52.556697  160939 command_runner.go:130] > configmap/coredns replaced
	I0522 18:32:52.556753  160939 start.go:946] {"host.minikube.internal": 192.168.67.1} host record injected into CoreDNS's ConfigMap
	I0522 18:32:52.557175  160939 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:32:52.557491  160939 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:32:52.557873  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.557907  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.557920  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.557932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558046  160939 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0522 18:32:52.558165  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:32:52.558237  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.558260  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.558272  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.560256  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:52.560319  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.560338  160939 round_trippers.go:580]     Audit-Id: 12b0e11e-6a44-4304-a157-2b7055e2205e
	I0522 18:32:52.560351  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.560363  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.560396  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.560416  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.560431  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.560444  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.560488  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561030  160939 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"353","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.561137  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:52.561162  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.561192  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.561209  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.561222  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.561529  160939 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:32:52.561547  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.561556  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.561562  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.561567  160939 round_trippers.go:580]     Content-Length: 1273
	I0522 18:32:52.561573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.561577  160939 round_trippers.go:580]     Audit-Id: e2fb2ed9-f480-430a-b9b8-1cb5e5498c36
	I0522 18:32:52.561587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.561592  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.561795  160939 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"359"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0522 18:32:52.562115  160939 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.562161  160939 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:32:52.562173  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.562180  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.562188  160939 round_trippers.go:473]     Content-Type: application/json
	I0522 18:32:52.562193  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.566308  160939 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:32:52.566355  160939 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:32:52.566400  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566361  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.566429  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566439  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566449  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566457  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566463  160939 round_trippers.go:580]     Content-Length: 1220
	I0522 18:32:52.566468  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566473  160939 round_trippers.go:580]     Audit-Id: 6b60d46d-17ef-45bb-880c-06c439fe9bab
	I0522 18:32:52.566411  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.566491  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.566498  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.566501  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:52.566505  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.566505  160939 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:32:52.566509  160939 round_trippers.go:580]     Audit-Id: 2b01bd0d-fb2f-4a1e-8831-7dc2e68860f5
	I0522 18:32:52.566521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.566538  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"360","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:52.972030  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:52.972055  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.972069  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.972073  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.973864  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.973887  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.973900  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.973905  160939 round_trippers.go:580]     Audit-Id: 487db757-1a6c-442b-b5d4-799652d478f6
	I0522 18:32:52.973912  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.973918  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.973922  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.973927  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.974296  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:52.974890  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:52.974910  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:52.974922  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:52.974927  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:52.976545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:52.976564  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:52.976574  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:52.976579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:52.976584  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:52.976589  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:52 GMT
	I0522 18:32:52.976594  160939 round_trippers.go:580]     Audit-Id: 785dc732-84fe-4320-964c-c2a36a76c8f6
	I0522 18:32:52.976600  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:52.976934  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.058578  160939 round_trippers.go:463] GET https://192.168.67.2:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0522 18:32:53.058609  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.058620  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.058627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.061245  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.061289  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.061299  160939 round_trippers.go:580]     Content-Length: 291
	I0522 18:32:53.061340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.061372  160939 round_trippers.go:580]     Audit-Id: 77d818dd-5f3a-495e-b1ef-ad1a288275fa
	I0522 18:32:53.061388  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.061402  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.061415  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.061432  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.061472  160939 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"71b5615b-3c74-4b26-896b-a9f977849bfd","resourceVersion":"370","creationTimestamp":"2024-05-22T18:32:37Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0522 18:32:53.061571  160939 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-737786" context rescaled to 1 replicas
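
	The GET/PUT pair against the Scale subresource above is how minikube trims the two default CoreDNS replicas down to one on a fresh cluster; the kubectl equivalent of that PUT is simply:

	  kubectl -n kube-system scale deployment coredns --replicas=1
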
	I0522 18:32:53.076516  160939 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0522 18:32:53.076577  160939 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0522 18:32:53.076599  160939 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076613  160939 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0522 18:32:53.076633  160939 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0522 18:32:53.076657  160939 command_runner.go:130] > pod/storage-provisioner created
	I0522 18:32:53.076679  160939 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.02727208s)
	I0522 18:32:53.079116  160939 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:32:53.080504  160939 addons.go:505] duration metric: took 1.3313922s for enable addons: enabled=[default-storageclass storage-provisioner]
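
	With both addons applied, the objects created in this log can be inspected directly (names taken from the command_runner output above):

	  kubectl get storageclass standard
	  kubectl -n kube-system get pod storage-provisioner
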
	I0522 18:32:53.471419  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.471453  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.471462  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.471488  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.473769  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.473791  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.473800  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.473806  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.473811  160939 round_trippers.go:580]     Audit-Id: 19f0699f-65e4-4321-a5c4-f6dcf712595d
	I0522 18:32:53.473821  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.473827  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.473830  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.474009  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.474506  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.474523  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.474532  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.474538  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.476545  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.476568  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.476579  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.476584  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.476591  160939 round_trippers.go:580]     Audit-Id: 723b363a-893a-4a61-92a4-6c8128f0cdae
	I0522 18:32:53.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.476602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.476735  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.971555  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:53.971574  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.971587  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.971591  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.973627  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:53.973649  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.973659  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.973664  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.973670  160939 round_trippers.go:580]     Audit-Id: e1a5610a-326e-418b-be80-a1b218bad573
	I0522 18:32:53.973679  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.973686  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.973691  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.973900  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"364","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6182 chars]
	I0522 18:32:53.974364  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:53.974377  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:53.974386  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:53.974395  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:53.976104  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:53.976125  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:53.976134  160939 round_trippers.go:580]     Audit-Id: 1d117d40-7bef-4873-8469-b7cbb9e6e3e0
	I0522 18:32:53.976139  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:53.976143  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:53.976148  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:53.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:53.976158  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:53 GMT
	I0522 18:32:53.976278  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:53.976641  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
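The block above is one full iteration of the readiness poll: client-go's debug round-tripper prints each GET against the API server (the pod, then its node), and pod_ready.go then reports the verdict ("Ready":"False"). The check itself reduces to inspecting the PodReady condition on the returned object. Below is a minimal, self-contained sketch of that check — an illustration, not minikube's actual helper — assuming only the k8s.io/api and k8s.io/apimachinery modules:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // isPodReady mirrors the check behind the pod_ready.go lines above:
    // a pod counts as Ready only when its PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	now := metav1.Now()
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name: "coredns-7db6d8ff4d-fhhmr",
    			// Non-nil deletionTimestamp: the pod is terminating, as in the log.
    			DeletionTimestamp: &now,
    		},
    		Status: corev1.PodStatus{
    			Conditions: []corev1.PodCondition{
    				{Type: corev1.PodReady, Status: corev1.ConditionFalse},
    			},
    		},
    	}
    	fmt.Printf("ready=%v terminating=%v\n", isPodReady(pod), pod.DeletionTimestamp != nil)
    }

The constructed pod carries a non-nil deletionTimestamp, mirroring the coredns pod in the log: a terminating pod can still report conditions, but it will not transition to Ready.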
	I0522 18:32:54.471526  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.471550  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.471561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.471566  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.473892  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.473909  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.473916  160939 round_trippers.go:580]     Audit-Id: 38fa8439-426c-4d8e-8939-768fdd726b5d
	I0522 18:32:54.473920  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.473923  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.473929  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.473935  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.473939  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.474175  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.474657  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.474672  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.474679  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.474682  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.476422  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.476440  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.476449  160939 round_trippers.go:580]     Audit-Id: a464492a-887c-4ec3-9a36-841c6416e733
	I0522 18:32:54.476454  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.476458  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.476461  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.476465  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.476470  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.476646  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:54.971300  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:54.971328  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.971338  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.971345  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.973536  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:54.973554  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.973560  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.973564  160939 round_trippers.go:580]     Audit-Id: 233e0e2b-7f8e-4aa8-8c2e-b30dfaf9e4ee
	I0522 18:32:54.973569  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.973575  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.973580  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.973588  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.973824  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:54.974258  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:54.974270  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:54.974277  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:54.974281  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:54.976126  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:54.976141  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:54.976153  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:54.976157  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:54.976161  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:54 GMT
	I0522 18:32:54.976166  160939 round_trippers.go:580]     Audit-Id: 72f4a310-bf67-444b-9e24-1577b45c6c56
	I0522 18:32:54.976171  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:54.976176  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:54.976347  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.471862  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.471892  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.471903  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.471908  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.474083  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:55.474099  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.474105  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.474108  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.474111  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.474114  160939 round_trippers.go:580]     Audit-Id: 8719e64b-1bf6-4245-a412-eed38a58d1ce
	I0522 18:32:55.474117  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.474121  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.474290  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.474797  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.474823  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.474832  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.474840  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.476324  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.476342  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.476349  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.476355  160939 round_trippers.go:580]     Audit-Id: db213f13-4ec8-4ca3-8987-3f1626a1ad2d
	I0522 18:32:55.476361  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.476365  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.476368  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.476372  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.476512  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.972155  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:55.972178  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.972186  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.972189  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.973945  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.973967  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.973975  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.973981  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.973987  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.973990  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.973994  160939 round_trippers.go:580]     Audit-Id: a2f51de9-bbaf-49c3-b52e-cd37fc92f529
	I0522 18:32:55.973999  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.974153  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:55.974595  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:55.974611  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:55.974621  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:55.974627  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:55.976270  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:55.976293  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:55.976301  160939 round_trippers.go:580]     Audit-Id: 93227216-8ffe-41b3-8a0d-0b4e86a54912
	I0522 18:32:55.976306  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:55.976310  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:55.976315  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:55.976319  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:55.976325  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:55 GMT
	I0522 18:32:55.976427  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:55.976688  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
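The timestamps show the cadence: each iteration fires at roughly .471 and .971 within the second, i.e. a 500 ms tick, with two GETs per tick. A hedged sketch of such a loop using client-go plus apimachinery's wait helpers — illustrative names, not minikube's code; it assumes a reachable cluster via the default kubeconfig:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod every 500 ms until it is Ready or the
    // timeout expires, matching the cadence visible in the log above.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // transient error: keep polling
    			}
    			for _, c := range pod.Status.Conditions {
    				if c.Type == corev1.PodReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	err = waitPodReady(context.Background(), cs, "kube-system", "coredns-7db6d8ff4d-fhhmr")
    	fmt.Println("wait result:", err)
    }

Returning (false, nil) on each unready poll keeps the loop alive until the timeout, which is how a pod that never becomes Ready turns into the multi-minute test durations recorded for this run.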
	I0522 18:32:56.472139  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.472158  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.472167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.472170  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.474238  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.474260  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.474268  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.474274  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.474279  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.474283  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.474287  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.474292  160939 round_trippers.go:580]     Audit-Id: f67f7ae7-b10d-49f2-94a9-005c4a460c94
	I0522 18:32:56.474484  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.474925  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.474940  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.474946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.474951  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.476537  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.476552  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.476558  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.476563  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.476567  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.476570  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.476573  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.476576  160939 round_trippers.go:580]     Audit-Id: 518e1062-0e5b-47ad-b60f-0ff66e25a622
	I0522 18:32:56.476712  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:56.971350  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:56.971373  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.971381  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.971384  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.973476  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:56.973497  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.973506  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.973511  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.973517  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.973523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.973527  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.973531  160939 round_trippers.go:580]     Audit-Id: eedbefe3-18e8-407d-9ede-0033266cdf11
	I0522 18:32:56.973633  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:56.974094  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:56.974111  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:56.974118  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:56.974123  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:56.975718  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:56.975738  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:56.975747  160939 round_trippers.go:580]     Audit-Id: 74afa443-a147-43c7-8759-9886afead09a
	I0522 18:32:56.975753  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:56.975758  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:56.975764  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:56.975768  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:56.975771  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:56 GMT
	I0522 18:32:56.975928  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.471499  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.471522  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.471528  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.471532  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.473644  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.473662  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.473668  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.473671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.473674  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.473677  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.473680  160939 round_trippers.go:580]     Audit-Id: 2eec1341-a4a0-4edc-9eab-dd0cee12d4eb
	I0522 18:32:57.473682  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.473870  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.474329  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.474343  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.474350  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.474353  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.475871  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.475886  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.475896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.475901  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.475906  160939 round_trippers.go:580]     Audit-Id: 7e8e4b95-aa91-463a-8f1e-a7944e5daa49
	I0522 18:32:57.475911  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.475916  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.475920  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.476058  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.971752  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:57.971774  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.971786  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.971790  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.974020  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:57.974037  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.974043  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.974047  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.974051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.974054  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.974057  160939 round_trippers.go:580]     Audit-Id: 9042de65-ddca-4653-8deb-6e07b20ad9d2
	I0522 18:32:57.974061  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.974263  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:57.974686  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:57.974698  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:57.974705  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:57.974709  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:57.976426  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:57.976445  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:57.976453  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:57.976459  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:57.976464  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:57.976467  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:57 GMT
	I0522 18:32:57.976472  160939 round_trippers.go:580]     Audit-Id: 9526988d-2210-4a9c-a210-f69ada2f111e
	I0522 18:32:57.976478  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:57.976615  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"322","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:57.976919  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
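The decisive detail sits in the response bodies themselves: the pod was created at 18:32:51 and already carries deletionTimestamp 2024-05-22T18:33:22Z with deletionGracePeriodSeconds 30, so the object being polled is terminating and can never turn Ready. Those fields can be pulled straight out of a captured body; the snippet below runs on a trimmed copy of the JSON above (stdlib only, fields cut down for illustration):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // podMeta holds just the metadata fields relevant to this failure.
    type podMeta struct {
    	Name                       string `json:"name"`
    	ResourceVersion            string `json:"resourceVersion"`
    	CreationTimestamp          string `json:"creationTimestamp"`
    	DeletionTimestamp          string `json:"deletionTimestamp"`
    	DeletionGracePeriodSeconds int64  `json:"deletionGracePeriodSeconds"`
    }

    func main() {
    	// Trimmed from the request.go:1212 body logged above.
    	body := `{"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30}}`
    	var obj struct {
    		Metadata podMeta `json:"metadata"`
    	}
    	if err := json.Unmarshal([]byte(body), &obj); err != nil {
    		panic(err)
    	}
    	m := obj.Metadata
    	fmt.Printf("%s rv=%s created=%s deleting-at=%s grace=%ds\n",
    		m.Name, m.ResourceVersion, m.CreationTimestamp, m.DeletionTimestamp, m.DeletionGracePeriodSeconds)
    }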
	I0522 18:32:58.471854  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.471880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.471893  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.471899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.474173  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.474197  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.474206  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.474211  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.474216  160939 round_trippers.go:580]     Audit-Id: 0827c408-752f-4496-b2bf-06881300dabc
	I0522 18:32:58.474220  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.474224  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.474229  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.474408  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.474983  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.474998  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.475008  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.475014  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.476910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.476934  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.476952  160939 round_trippers.go:580]     Audit-Id: 338928cb-0e5e-4004-be77-29760ea7f6ae
	I0522 18:32:58.476958  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.476962  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.476966  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.476971  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.476986  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.477133  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:58.972097  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:58.972125  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.972137  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.972141  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.974651  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:58.974676  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.974683  160939 round_trippers.go:580]     Audit-Id: 3b3e33fc-c0a8-4a82-9e28-68c6c5eaf90e
	I0522 18:32:58.974688  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.974692  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.974695  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.974698  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.974707  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.974973  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:58.975580  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:58.975600  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:58.975610  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:58.975615  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:58.977624  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:58.977644  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:58.977654  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:58.977661  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:58.977666  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:58.977671  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:58.977676  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:58 GMT
	I0522 18:32:58.977680  160939 round_trippers.go:580]     Audit-Id: aa509792-9021-4f49-a36b-6862ae864dbf
	I0522 18:32:58.977836  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.471442  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.471471  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.471481  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.471486  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.473954  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.473974  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.473983  160939 round_trippers.go:580]     Audit-Id: 04e773e3-ead6-4608-b93f-200b1f7771a2
	I0522 18:32:59.473989  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.473992  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.473997  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.474001  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.474005  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.474205  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.474819  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.474880  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.474905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.474923  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.476903  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.476923  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.476932  160939 round_trippers.go:580]     Audit-Id: 57919320-6611-4945-a59e-eab9e9d1f7e3
	I0522 18:32:59.476937  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.476943  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.476949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.476953  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.476958  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.477092  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.971835  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:32:59.971912  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.971932  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.971946  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.974565  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:32:59.974586  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.974602  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.974606  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.974610  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.974614  160939 round_trippers.go:580]     Audit-Id: 4509f4e5-e206-4cb4-9616-c5dedd8269bf
	I0522 18:32:59.974619  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.974624  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.974794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:32:59.975386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:32:59.975404  160939 round_trippers.go:469] Request Headers:
	I0522 18:32:59.975413  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:32:59.975419  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:32:59.977401  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:32:59.977425  160939 round_trippers.go:577] Response Headers:
	I0522 18:32:59.977434  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:32:59.977440  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:32:59.977445  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:32:59.977449  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:32:59 GMT
	I0522 18:32:59.977453  160939 round_trippers.go:580]     Audit-Id: ba22dbea-6d68-4ec4-bcad-c24172ba5062
	I0522 18:32:59.977458  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:32:59.977594  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:32:59.977937  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
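	(The cycle above — GET the pod, GET the node, log "Ready":"False", sleep, retry — is minikube's pod readiness wait loop. A minimal sketch of that pattern using client-go is below; the kubeconfig path is an illustrative assumption, and the pod/namespace names are taken from the log. This is not minikube's actual pod_ready.go, just the same technique in isolation.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the Pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location; minikube derives this from its profile dir.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(
			context.TODO(), "coredns-7db6d8ff4d-fhhmr", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		if podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q has status Ready: false; retrying\n", pod.Name)
		// Roughly matches the ~500ms polling cadence visible in the timestamps above.
		time.Sleep(500 * time.Millisecond)
	}
}
```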
	I0522 18:33:00.471222  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.471241  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.471249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.471252  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.473593  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.473618  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.473629  160939 round_trippers.go:580]     Audit-Id: c4fb389b-3f7d-490e-a802-3bf985dfd423
	I0522 18:33:00.473636  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.473641  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.473645  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.473651  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.473656  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.473892  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.474545  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.474565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.474576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.474581  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.476561  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.476581  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.476590  160939 round_trippers.go:580]     Audit-Id: 67254c57-0400-4b43-af9d-f4913af7b105
	I0522 18:33:00.476595  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.476599  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.476603  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.476608  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.476611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.476748  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:00.971233  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:00.971261  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.971299  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.971306  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.973731  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:00.973750  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.973758  160939 round_trippers.go:580]     Audit-Id: 2f76e9b4-7689-4d89-b284-e9126bd9bad5
	I0522 18:33:00.973762  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.973765  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.973771  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.973774  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.973784  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.974017  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:00.974608  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:00.974625  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:00.974634  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:00.974639  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:00.976439  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:00.976457  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:00.976465  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:00 GMT
	I0522 18:33:00.976470  160939 round_trippers.go:580]     Audit-Id: f4fe94f7-5d5c-4b51-a0c7-f46b19a6f0d4
	I0522 18:33:00.976477  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:00.976485  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:00.976494  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:00.976502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:00.976610  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.471893  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.471931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.471942  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.471949  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.474657  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.474680  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.474688  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.474696  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.474702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.474725  160939 round_trippers.go:580]     Audit-Id: f26f6817-f4b1-4acb-bdf5-088215c31307
	I0522 18:33:01.474736  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.474740  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.474974  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.475618  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.475639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.475649  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.475655  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.477465  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.477487  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.477497  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.477505  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.477510  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.477514  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.477517  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.477524  160939 round_trippers.go:580]     Audit-Id: 1977529f-1acd-423c-9682-42cf6dd4398d
	I0522 18:33:01.477708  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:01.971204  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:01.971371  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.971388  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.971393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974041  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:01.974091  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.974104  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.974111  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.974116  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.974121  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.974127  160939 round_trippers.go:580]     Audit-Id: 292c70c4-b00e-4836-b96a-6c8a747f9bd9
	I0522 18:33:01.974131  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.974293  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:01.974866  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:01.974888  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:01.974899  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:01.974905  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:01.976825  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:01.976848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:01.976856  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:01.976862  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:01 GMT
	I0522 18:33:01.976868  160939 round_trippers.go:580]     Audit-Id: 388c0271-dee4-4384-b77b-c690f1d36c5a
	I0522 18:33:01.976873  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:01.976880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:01.976883  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:01.977037  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.471454  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.471549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.471565  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.471574  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.474157  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.474178  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.474186  160939 round_trippers.go:580]     Audit-Id: 82bb2437-1ea8-4e8d-9e5f-70376d7ee9ee
	I0522 18:33:02.474192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.474196  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.474200  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.474205  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.474208  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.474392  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.475060  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.475077  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.475087  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.475092  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.477070  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.477099  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.477109  160939 round_trippers.go:580]     Audit-Id: 67eab720-8fd6-4965-a754-5010c88a7253
	I0522 18:33:02.477116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.477120  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.477124  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.477127  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.477131  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.477280  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:02.477649  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
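	(The "Request Headers" / "Response Status" / "Response Headers" lines above come from client-go's round_trippers.go debug output, which is produced by wrapping the HTTP transport. A hedged sketch of that wrapping technique follows; the type and field names here are illustrative, not client-go's internals.)

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// loggingRoundTripper wraps another RoundTripper and prints each request's
// method, URL, and headers, then the response status, latency, and headers,
// in roughly the shape seen in the log above.
type loggingRoundTripper struct {
	next http.RoundTripper
}

func (l loggingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	fmt.Printf("%s %s\nRequest Headers:\n", req.Method, req.URL)
	for k, v := range req.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	start := time.Now()
	resp, err := l.next.RoundTrip(req)
	if err != nil {
		return nil, err
	}
	fmt.Printf("Response Status: %s in %d milliseconds\nResponse Headers:\n",
		resp.Status, time.Since(start).Milliseconds())
	for k, v := range resp.Header {
		fmt.Printf("    %s: %v\n", k, v)
	}
	return resp, nil
}

func main() {
	client := &http.Client{Transport: loggingRoundTripper{next: http.DefaultTransport}}
	if _, err := client.Get("https://example.com/"); err != nil {
		fmt.Println("request failed:", err)
	}
}
```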
	I0522 18:33:02.971540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:02.971565  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.971576  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.971582  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.974293  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:02.974315  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.974325  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.974330  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.974335  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.974340  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.974345  160939 round_trippers.go:580]     Audit-Id: ad75c6ab-9962-47cf-be26-f410ec61bd12
	I0522 18:33:02.974350  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.974587  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:02.975218  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:02.975239  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:02.975249  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:02.975258  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:02.977182  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:02.977245  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:02.977260  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:02 GMT
	I0522 18:33:02.977266  160939 round_trippers.go:580]     Audit-Id: c0467f5a-9a3a-40e8-b473-9c175fd6891e
	I0522 18:33:02.977271  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:02.977277  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:02.977284  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:02.977288  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:02.977392  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.472108  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.472133  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.472143  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.472149  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.474741  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.474768  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.474778  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.474782  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.474787  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.474792  160939 round_trippers.go:580]     Audit-Id: 1b9bea48-179f-40ca-a879-0e436eb40d14
	I0522 18:33:03.474797  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.474801  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.474970  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.475572  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.475591  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.475601  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.475607  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.477470  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.477489  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.477497  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.477502  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.477506  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.477511  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.477515  160939 round_trippers.go:580]     Audit-Id: b00b1393-d773-4e79-83a7-fbadc0d83dce
	I0522 18:33:03.477521  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.477650  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:03.971411  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:03.971440  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.971450  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.971455  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.974132  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:03.974155  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.974164  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.974171  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.974176  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.974180  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.974185  160939 round_trippers.go:580]     Audit-Id: 2b46951a-0d87-464c-b928-e0491b518b0e
	I0522 18:33:03.974192  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.974344  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:03.974929  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:03.974949  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:03.974959  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:03.974965  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:03.976727  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:03.976759  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:03.976769  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:03 GMT
	I0522 18:33:03.976775  160939 round_trippers.go:580]     Audit-Id: efda080a-3af4-4b70-aa46-baefc2b1a086
	I0522 18:33:03.976779  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:03.976784  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:03.976788  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:03.976792  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:03.977006  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.471440  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.471466  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.471475  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.471478  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.473781  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.473798  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.473806  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.473812  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.473823  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.473828  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.473832  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.473837  160939 round_trippers.go:580]     Audit-Id: 584fe422-d82d-4c7e-81d2-665d8be8873b
	I0522 18:33:04.474014  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.474484  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.474542  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.474564  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.474581  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.476818  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.476848  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.476856  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.476862  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.476866  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.476872  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.476877  160939 round_trippers.go:580]     Audit-Id: 577875ba-d973-41fb-8b48-0973202f1354
	I0522 18:33:04.476885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.477034  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.971729  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:04.971751  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.971759  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.971763  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.974273  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:04.974295  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.974304  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.974311  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.974318  160939 round_trippers.go:580]     Audit-Id: e77cbda3-9098-456e-962d-06d9e7e98aee
	I0522 18:33:04.974323  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.974336  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.974341  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.974475  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:04.975121  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:04.975157  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:04.975167  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:04.975172  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:04.977047  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:04.977076  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:04.977086  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:04.977094  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:04.977102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:04 GMT
	I0522 18:33:04.977110  160939 round_trippers.go:580]     Audit-Id: 15591115-c0cb-473f-90d4-6c56cf6353d7
	I0522 18:33:04.977116  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:04.977124  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:04.977257  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:04.977558  160939 pod_ready.go:102] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status "Ready":"False"
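	(Note that every Pod body above carries a non-nil "deletionTimestamp", i.e. the coredns pod is already terminating, so its Ready condition cannot return to True; the waiter keeps logging "Ready":"False" until the ReplicaSet's replacement pod is observed. A small sketch of a check that distinguishes "terminating" from "not ready yet" is below; it is an assumption-labelled illustration, not minikube's code, and the empty Pod in main stands in for the GET response shown above.)

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// podState classifies a fetched Pod. A non-nil DeletionTimestamp means the
// pod is terminating and waiting for Ready on it cannot succeed.
func podState(pod *corev1.Pod) string {
	if pod.DeletionTimestamp != nil {
		return "terminating"
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return "ready"
		}
	}
	return "not ready yet"
}

func main() {
	pod := &corev1.Pod{} // in practice, the object returned by the GET above
	fmt.Println(podState(pod))
}
```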
	I0522 18:33:05.471962  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.471987  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.471997  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.472003  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.474481  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.474506  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.474516  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.474523  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.474527  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.474532  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.474536  160939 round_trippers.go:580]     Audit-Id: fdb343ad-37ed-4d5e-8481-409ca7bff1bb
	I0522 18:33:05.474542  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.474675  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.475316  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.475335  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.475345  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.475349  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.477162  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:05.477192  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.477208  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.477219  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.477224  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.477230  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.477237  160939 round_trippers.go:580]     Audit-Id: 5a4a1adb-a9e7-45d6-89b9-6f8cbdc8e14f
	I0522 18:33:05.477241  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.477365  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:05.971575  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:05.971603  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.971614  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.971620  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.973961  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.973988  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.973998  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.974005  160939 round_trippers.go:580]     Audit-Id: 6cf57dbb-f61f-4a34-ba71-0fa1a7be6c2f
	I0522 18:33:05.974009  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.974015  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.974020  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.974024  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.974227  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"396","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6411 chars]
	I0522 18:33:05.974844  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:05.974866  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:05.974877  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:05.974885  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:05.976914  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:05.976937  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:05.976948  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:05 GMT
	I0522 18:33:05.976955  160939 round_trippers.go:580]     Audit-Id: f5c6902b-e141-4739-b75c-abe5d7d10bcc
	I0522 18:33:05.976962  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:05.976969  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:05.976977  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:05.976982  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:05.977139  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.471359  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-fhhmr
	I0522 18:33:06.471382  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.471390  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.471393  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.473976  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.473998  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.474008  160939 round_trippers.go:580]     Audit-Id: 678a5898-c668-42b8-9f9d-cd08c0af9f0a
	I0522 18:33:06.474014  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.474021  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.474026  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.474032  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.474036  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.474212  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-fhhmr","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"be9eeea7-ca23-4606-8965-0eb7a95e4a0d","resourceVersion":"419","creationTimestamp":"2024-05-22T18:32:51Z","deletionTimestamp":"2024-05-22T18:33:22Z","deletionGracePeriodSeconds":30,"labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:pod
AntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecutio [truncated 6465 chars]
	I0522 18:33:06.474787  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.474806  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.474816  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.474824  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.476696  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.476720  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.476727  160939 round_trippers.go:580]     Audit-Id: 08522360-196f-4610-a526-8fbc3b876994
	I0522 18:33:06.476732  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.476736  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.476739  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.476742  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.476754  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.476918  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.477418  160939 pod_ready.go:97] pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.67.2 HostIPs:[{IP:192.168.67.2
}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0522 18:33:06.477449  160939 pod_ready.go:81] duration metric: took 14.506466075s for pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace to be "Ready" ...
	E0522 18:33:06.477464  160939 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-7db6d8ff4d-fhhmr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-05-22 18:32:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.6
7.2 HostIPs:[{IP:192.168.67.2}] PodIP: PodIPs:[] StartTime:2024-05-22 18:32:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-05-22 18:32:53 +0000 UTC,FinishedAt:2024-05-22 18:33:06 +0000 UTC,ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.1 ImageID:docker-pullable://registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1 ContainerID:docker://ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de Started:0xc001d9d6c0 AllocatedResources:map[] Resources:nil VolumeMounts:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
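
Editor's note: the pod_ready messages above are minikube giving up on coredns-7db6d8ff4d-fhhmr rather than timing out. The response bodies carry a deletionTimestamp and a container that terminated with exit code 0 (Reason:Completed), so the pod's phase is Succeeded, a terminal state, and no amount of extra waiting can make it Ready, which is why the loop "skips" it and moves on to the surviving replica. A minimal client-go sketch of that distinction, with illustrative names (isPodReady is an assumption here, not minikube's helper):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether a pod is Ready, and whether its phase is
    // terminal (Succeeded/Failed) -- in which case retrying is pointless,
    // so a wait loop should skip the pod instead of sleeping further.
    func isPodReady(pod *corev1.Pod) (ready, terminal bool) {
        if pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed {
            return false, true
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, false
            }
        }
        return false, false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
            "coredns-7db6d8ff4d-fhhmr", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ready, terminal := isPodReady(pod)
        fmt.Printf("ready=%v terminal=%v phase=%s\n", ready, terminal, pod.Status.Phase)
    }
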
	I0522 18:33:06.477476  160939 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:06.477540  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.477549  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.477558  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.477569  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.479562  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.479577  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.479583  160939 round_trippers.go:580]     Audit-Id: 9a30cf33-1204-4670-a99f-86946c97d423
	I0522 18:33:06.479587  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.479591  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.479597  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.479605  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.479611  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.479794  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.480253  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.480269  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.480275  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.480279  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.481839  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.481857  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.481867  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.481872  160939 round_trippers.go:580]     Audit-Id: fa40a49d-204f-481d-8912-a34512c1ae3b
	I0522 18:33:06.481876  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.481880  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.481884  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.481888  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.481980  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:06.978658  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:06.978680  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.978691  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.978699  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.980836  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:06.980853  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.980860  160939 round_trippers.go:580]     Audit-Id: afbb292e-0ad0-4084-869c-e9ab1e1013e2
	I0522 18:33:06.980864  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.980867  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.980869  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.980871  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.980874  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.981047  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"400","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6336 chars]
	I0522 18:33:06.981449  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:06.981462  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:06.981468  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:06.981471  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:06.982978  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:06.983001  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:06.983007  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:06.983010  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:06.983014  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:06.983018  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:06.983021  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:06 GMT
	I0522 18:33:06.983024  160939 round_trippers.go:580]     Audit-Id: 5f3372bc-5c9a-49ce-8e2e-d96da0513d85
	I0522 18:33:06.983146  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.478352  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:33:07.478377  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.478384  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.478388  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.480498  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.480523  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.480531  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.480535  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.480540  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.480543  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.480546  160939 round_trippers.go:580]     Audit-Id: eb5f2654-4971-4578-bff8-10e4102baa23
	I0522 18:33:07.480550  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.480747  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:33:07.481177  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.481191  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.481197  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.481201  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.482856  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.482869  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.482876  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.482880  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.482882  160939 round_trippers.go:580]     Audit-Id: 8e36f69f-54f0-4e9d-a61f-f28960dbb847
	I0522 18:33:07.482885  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.482891  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.482896  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.483013  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.483304  160939 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.483324  160939 pod_ready.go:81] duration metric: took 1.005836965s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483334  160939 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.483386  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:33:07.483393  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.483399  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.483403  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.485055  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.485074  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.485080  160939 round_trippers.go:580]     Audit-Id: 36a9d3b1-5c0c-41cd-92e6-65aaf83162ed
	I0522 18:33:07.485084  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.485089  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.485093  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.485098  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.485102  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.485211  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:33:07.485525  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.485537  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.485544  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.485547  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.486957  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.486977  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.486984  160939 round_trippers.go:580]     Audit-Id: 4d183f34-de9b-40df-89b0-747f4b8d080a
	I0522 18:33:07.486991  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.486997  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.487008  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.487015  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.487019  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.487106  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.487417  160939 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.487433  160939 pod_ready.go:81] duration metric: took 4.091969ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
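
Editor's note: etcd-multinode-737786 and the control-plane pods polled next are static pods. The response bodies show kubernetes.io/config.source:"file", a kubernetes.io/config.mirror annotation, and an ownerReference pointing at the Node itself, meaning the kubelet created them from manifest files and mirrors them into the API. A short sketch of telling such mirrors apart in a pod list; the literal annotation key is used instead of the kubelet's internal constant to stay dependency-light:

    package waiters

    import corev1 "k8s.io/api/core/v1"

    // isMirrorPod reports whether an API object is the kubelet-created
    // mirror of a static pod (etcd, kube-apiserver, kube-controller-manager,
    // kube-scheduler in the log above).
    func isMirrorPod(pod *corev1.Pod) bool {
        _, ok := pod.Annotations["kubernetes.io/config.mirror"]
        return ok
    }
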
	I0522 18:33:07.487445  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.487498  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:33:07.487505  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.487511  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.487514  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.489030  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.489044  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.489060  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.489064  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.489068  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.489072  160939 round_trippers.go:580]     Audit-Id: 816d35e6-d77c-435e-912a-947f9c9ca4d7
	I0522 18:33:07.489075  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.489078  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.489182  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:33:07.489546  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.489558  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.489564  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.489568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.490910  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.490924  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.490930  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.490934  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.490937  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.490942  160939 round_trippers.go:580]     Audit-Id: 15a2ac49-01ac-4660-8380-560b4572c707
	I0522 18:33:07.490945  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.490949  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.491063  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.491412  160939 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.491430  160939 pod_ready.go:81] duration metric: took 3.978447ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491441  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.491501  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:33:07.491510  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.491520  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.491525  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.492901  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.492917  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.492936  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.492944  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.492949  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.492953  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.492958  160939 round_trippers.go:580]     Audit-Id: 599fa209-a829-4a91-9f16-72ec6e1a6954
	I0522 18:33:07.492961  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.493092  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:33:07.493557  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.493574  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.493584  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.493594  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.495001  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.495023  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.495032  160939 round_trippers.go:580]     Audit-Id: 451564e8-a844-4514-b8e9-ba808ecbe9d8
	I0522 18:33:07.495042  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.495047  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.495051  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.495057  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.495061  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.495200  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.495470  160939 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.495494  160939 pod_ready.go:81] duration metric: took 4.045749ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495507  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.495547  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:33:07.495553  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.495561  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.495568  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.497087  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.497100  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.497105  160939 round_trippers.go:580]     Audit-Id: 1fe00356-708f-49ce-b6e8-360006eb0d30
	I0522 18:33:07.497109  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.497114  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.497119  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.497123  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.497129  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.497236  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:33:07.671971  160939 request.go:629] Waited for 174.334017ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672035  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:07.672040  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.672048  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.672051  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.673738  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:07.673754  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.673762  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.673769  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.673773  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.673777  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.673781  160939 round_trippers.go:580]     Audit-Id: 72f84e56-248f-49c0-b60e-16c5fc7a3e8c
	I0522 18:33:07.673785  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.673915  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:07.674199  160939 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:07.674216  160939 pod_ready.go:81] duration metric: took 178.701037ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
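
Editor's note: the request.go:629 lines ("Waited ... due to client-side throttling, not priority and fairness") come from client-go's own token-bucket rate limiter, not from server-side API Priority and Fairness; the default limit is 5 QPS with a burst of 10, and this tight GET loop exhausts the burst, so requests queue for ~180ms each. Were this your own client, the knobs live on rest.Config. A sketch, assuming the default kubeconfig path:

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cfg.QPS = 50    // steady-state requests per second allowed by the client
        cfg.Burst = 100 // short bursts above QPS before throttling kicks in
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Printf("client ready: %T\n", cs)
    }
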
	I0522 18:33:07.674225  160939 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:07.871582  160939 request.go:629] Waited for 197.277518ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871632  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:33:07.871639  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:07.871646  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:07.871651  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:07.873675  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:07.873695  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:07.873702  160939 round_trippers.go:580]     Audit-Id: d0aea0c3-6995-4f17-9b3f-5c0b00c0a82e
	I0522 18:33:07.873707  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:07.873710  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:07.873714  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:07.873718  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:07.873721  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:07 GMT
	I0522 18:33:07.873885  160939 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:33:08.071516  160939 request.go:629] Waited for 197.279562ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071592  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:33:08.071600  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.071608  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.071612  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.073750  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.074093  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.074136  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.074152  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.074164  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.074178  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.074192  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.074205  160939 round_trippers.go:580]     Audit-Id: 9b07fddc-fd9a-4741-b67f-7bda2d392bdb
	I0522 18:33:08.074358  160939 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"411","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","fi [truncated 4832 chars]
	I0522 18:33:08.074852  160939 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:33:08.074892  160939 pod_ready.go:81] duration metric: took 400.659133ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:33:08.074912  160939 pod_ready.go:38] duration metric: took 16.114074117s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
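
Editor's note: the 16.1s of "extra waiting" summarized above is a plain poll-and-sleep against the API; each iteration GETs the pod and then its node at roughly 500ms intervals, which is what produces the repetitive request/response blocks in this log. A minimal sketch of the same pattern with apimachinery's wait helpers (waitPodReady is an illustrative name, not minikube's):

    package waiters

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls every 500ms, up to 6 minutes (the "waiting up to
    // 6m0s" budget in the log), until the pod's Ready condition is True.
    func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, err
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }
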
	I0522 18:33:08.074944  160939 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:33:08.075020  160939 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:33:08.085416  160939 command_runner.go:130] > 2247
	I0522 18:33:08.086205  160939 api_server.go:72] duration metric: took 16.337127031s to wait for apiserver process to appear ...
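
Editor's note: the process check is a one-liner run over SSH: pgrep's -f flag matches against the full command line, -x requires the pattern to match that line exactly (as an anchored regex), and -n keeps only the newest matching process; the lone number in the command_runner output above (2247) is that PID. The same probe from Go, as a sketch with os/exec:

    package waiters

    import (
        "os/exec"
        "strings"
    )

    // apiserverPID runs the same probe as the log line above. pgrep exits
    // non-zero when nothing matches, so err here usually means "no
    // apiserver process yet" rather than an execution failure.
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil // "2247" in the run above
    }
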
	I0522 18:33:08.086224  160939 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:33:08.086244  160939 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:33:08.090306  160939 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:33:08.090371  160939 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:33:08.090381  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.090392  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.090411  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.091107  160939 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:33:08.091121  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.091127  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.091130  160939 round_trippers.go:580]     Audit-Id: d9f416c6-963b-4b2c-9260-40a10a9a60da
	I0522 18:33:08.091133  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.091136  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.091138  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.091141  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.091144  160939 round_trippers.go:580]     Content-Length: 263
	I0522 18:33:08.091156  160939 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:33:08.091223  160939 api_server.go:141] control plane version: v1.30.1
	I0522 18:33:08.091237  160939 api_server.go:131] duration metric: took 5.007834ms to wait for apiserver health ...
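
Editor's note: two probes close out the bringup check: a raw GET /healthz, which returns the literal body "ok", and a GET /version, whose JSON yields the v1.30.1 control-plane version logged above. The same pair via client-go rather than raw HTTP, a sketch assuming cs is a *kubernetes.Clientset built as in the earlier snippets:

    package waiters

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    func probeAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
        // GET /healthz through the core REST client; a healthy control
        // plane answers with the plain-text body "ok".
        body, err := cs.CoreV1().RESTClient().Get().AbsPath("/healthz").Do(ctx).Raw()
        if err != nil {
            return err
        }
        fmt.Printf("healthz: %s\n", body)
        // GET /version via the discovery client; GitVersion is the
        // "v1.30.1" seen in the response body above.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            return err
        }
        fmt.Printf("control plane version: %s\n", v.GitVersion)
        return nil
    }
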
	I0522 18:33:08.091244  160939 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:33:08.271652  160939 request.go:629] Waited for 180.311539ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271713  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.271719  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.271727  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.271732  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.282797  160939 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0522 18:33:08.282826  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.282835  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.282840  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.282847  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.282853  160939 round_trippers.go:580]     Audit-Id: abfdd3f0-3612-4cc0-9cb4-169b86afc2f2
	I0522 18:33:08.282857  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.282862  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.284550  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.287099  160939 system_pods.go:59] 8 kube-system pods found
	I0522 18:33:08.287133  160939 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.287139  160939 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.287143  160939 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.287148  160939 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.287156  160939 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.287161  160939 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.287170  160939 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.287175  160939 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.287184  160939 system_pods.go:74] duration metric: took 195.931068ms to wait for pod list to return data ...
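
The "Waited for ... due to client-side throttling, not priority and fairness" lines above are emitted by client-go's token-bucket rate limiter, not by the API server. A minimal Go sketch of where those limits are configured (the kubeconfig path is a placeholder, and the QPS/Burst values are illustrative, not minikube's):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path; minikube writes its own under MINIKUBE_HOME.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	// Once Burst tokens are spent, further requests queue at QPS and the
    	// client logs "Waited for ... due to client-side throttling".
    	cfg.QPS = 5
    	cfg.Burst = 10

    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("kube-system pods:", len(pods.Items))
    }

At QPS 5 a queued request waits roughly 200ms for the next token, which is consistent with the ~180-200ms waits logged above.
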
	I0522 18:33:08.287199  160939 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:33:08.471518  160939 request.go:629] Waited for 184.244722ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471609  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:33:08.471620  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.471632  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.471638  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.473861  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.473879  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.473885  160939 round_trippers.go:580]     Audit-Id: 373a6323-7376-4ad7-973b-c7b9843fbc1e
	I0522 18:33:08.473889  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.473892  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.473895  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.473898  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.473902  160939 round_trippers.go:580]     Content-Length: 261
	I0522 18:33:08.473906  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.473926  160939 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"433"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:33:08.474181  160939 default_sa.go:45] found service account: "default"
	I0522 18:33:08.474221  160939 default_sa.go:55] duration metric: took 187.005275ms for default service account to be created ...
	I0522 18:33:08.474236  160939 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:33:08.671668  160939 request.go:629] Waited for 197.344631ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671731  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:33:08.671738  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.671747  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.671754  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.674660  160939 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:33:08.674693  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.674702  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.674707  160939 round_trippers.go:580]     Audit-Id: a86ce0e7-c7ca-4d9a-b3f4-5977392399ab
	I0522 18:33:08.674710  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.674715  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.674721  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.674726  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.675199  160939 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 57632 chars]
	I0522 18:33:08.677649  160939 system_pods.go:86] 8 kube-system pods found
	I0522 18:33:08.677676  160939 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running
	I0522 18:33:08.677682  160939 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running
	I0522 18:33:08.677689  160939 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running
	I0522 18:33:08.677700  160939 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running
	I0522 18:33:08.677712  160939 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running
	I0522 18:33:08.677718  160939 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running
	I0522 18:33:08.677728  160939 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running
	I0522 18:33:08.677736  160939 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:33:08.677746  160939 system_pods.go:126] duration metric: took 203.502619ms to wait for k8s-apps to be running ...
	I0522 18:33:08.677758  160939 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:33:08.677814  160939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:33:08.688253  160939 system_svc.go:56] duration metric: took 10.491535ms WaitForService to wait for kubelet
	I0522 18:33:08.688273  160939 kubeadm.go:576] duration metric: took 16.939194998s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
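
The wait map above (apiserver, apps_running, default_sa, node_ready, system_pods, ...) is ticked off by polling the API until each component verifies. A sketch, assuming a client-go clientset, of the kind of loop behind the "waiting for k8s-apps to be running" step; this is not minikube's actual implementation:

    package verify

    import (
    	"context"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    )

    // waitKubeSystemRunning polls until every kube-system pod reports phase
    // Running, in the spirit of the "waiting for k8s-apps to be running" step.
    func waitKubeSystemRunning(ctx context.Context, cs kubernetes.Interface) error {
    	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 2*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
    			if err != nil {
    				return false, nil // tolerate transient API errors and keep polling
    			}
    			for _, p := range pods.Items {
    				if p.Status.Phase != corev1.PodRunning {
    					return false, nil
    				}
    			}
    			return len(pods.Items) > 0, nil
    		})
    }
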
	I0522 18:33:08.688296  160939 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:33:08.871835  160939 request.go:629] Waited for 183.471986ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871919  160939 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:33:08.871931  160939 round_trippers.go:469] Request Headers:
	I0522 18:33:08.871941  160939 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:33:08.871948  160939 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:33:08.873838  160939 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:33:08.873861  160939 round_trippers.go:577] Response Headers:
	I0522 18:33:08.873868  160939 round_trippers.go:580]     Content-Type: application/json
	I0522 18:33:08.873874  160939 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:33:08.873881  160939 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:33:08.873884  160939 round_trippers.go:580]     Date: Wed, 22 May 2024 18:33:08 GMT
	I0522 18:33:08.873888  160939 round_trippers.go:580]     Audit-Id: 58d6eaf2-6ad2-480d-a68d-b490633e56b2
	I0522 18:33:08.873893  160939 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:33:08.874043  160939 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"434"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"433","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 5061 chars]
	I0522 18:33:08.874388  160939 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:33:08.874407  160939 node_conditions.go:123] node cpu capacity is 8
	I0522 18:33:08.874418  160939 node_conditions.go:105] duration metric: took 186.116583ms to run NodePressure ...
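
The NodePressure check reads per-node capacity from the Node status, which is where the 304681132Ki ephemeral storage and 8-CPU figures above come from. A short client-go sketch of reading the same fields:

    package verify

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    )

    // printNodeCapacity lists the fields behind the NodePressure log lines:
    // ephemeral-storage and cpu capacity from each Node's status.
    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
    	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
    	if err != nil {
    		return err
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
    	}
    	return nil
    }
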
	I0522 18:33:08.874431  160939 start.go:240] waiting for startup goroutines ...
	I0522 18:33:08.874437  160939 start.go:245] waiting for cluster config update ...
	I0522 18:33:08.874451  160939 start.go:254] writing updated cluster config ...
	I0522 18:33:08.876274  160939 out.go:177] 
	I0522 18:33:08.877676  160939 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:33:08.877789  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.879303  160939 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:33:08.880612  160939 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:33:08.881728  160939 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:33:08.882756  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:08.882774  160939 cache.go:56] Caching tarball of preloaded images
	I0522 18:33:08.882785  160939 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:33:08.882855  160939 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:33:08.882870  160939 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:33:08.882934  160939 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:33:08.898326  160939 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:33:08.898343  160939 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:33:08.898358  160939 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:33:08.898387  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:33:08.898479  160939 start.go:364] duration metric: took 72.592µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:33:08.898505  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0522 18:33:08.898623  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:33:08.900307  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:33:08.900408  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:33:08.900435  160939 client.go:168] LocalClient.Create starting
	I0522 18:33:08.900508  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:33:08.900541  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900564  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900623  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:33:08.900647  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:33:08.900668  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:33:08.900894  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:33:08.915750  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc001f32540 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:33:08.915790  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
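
The static IP is derived from the existing network's subnet: the gateway holds 192.168.67.1, the primary node 192.168.67.2, so the m02 node gets 192.168.67.3. A simplified sketch of that derivation (not minikube's exact logic; no overflow handling):

    package main

    import (
    	"fmt"
    	"net"
    )

    // hostIP returns the nth address after the network base, e.g. base
    // 192.168.67.0 with n=3 gives 192.168.67.3 for the second node.
    func hostIP(cidr string, n int) (net.IP, error) {
    	_, ipnet, err := net.ParseCIDR(cidr)
    	if err != nil {
    		return nil, err
    	}
    	ip := ipnet.IP.To4()
    	if ip == nil {
    		return nil, fmt.Errorf("IPv4 only in this sketch")
    	}
    	out := make(net.IP, len(ip))
    	copy(out, ip)
    	out[3] += byte(n) // assumes a /24-sized pool, no overflow checks
    	return out, nil
    }

    func main() {
    	ip, _ := hostIP("192.168.67.0/24", 3)
    	fmt.Println(ip) // 192.168.67.3
    }
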
	I0522 18:33:08.915845  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:33:08.930295  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:33:08.945898  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:33:08.945964  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:33:09.453161  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:33:09.453202  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:33:09.453224  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:33:09.453289  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:33:13.570301  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.116968437s)
	I0522 18:33:13.570337  160939 kic.go:203] duration metric: took 4.117109757s to extract preloaded images to volume ...
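
The preload step above is an ordinary docker run with tar as the entrypoint, mounting the tarball read-only and the node volume as the extraction target. A Go sketch of the same invocation via os/exec (the paths and image name are placeholders, not values to rely on):

    package preload

    import (
    	"fmt"
    	"os/exec"
    )

    // extractPreload replays the log's extraction step: mount the preload
    // tarball read-only and untar it into the node's docker volume.
    func extractPreload(tarball, volume, image string) error {
    	cmd := exec.Command("docker", "run", "--rm",
    		"--entrypoint", "/usr/bin/tar",
    		"-v", tarball+":/preloaded.tar:ro",
    		"-v", volume+":/extractDir",
    		image,
    		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("extract failed: %v: %s", err, out)
    	}
    	return nil
    }
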
	W0522 18:33:13.570466  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:33:13.570568  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:33:13.614931  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:33:13.883217  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:33:13.899745  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:13.916953  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:33:13.956223  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
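
Between steps the log keeps re-checking container state with docker container inspect and a Go template. A small wrapper illustrating that call:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // containerStatus wraps the inspect call repeated throughout the log:
    // docker container inspect NAME --format={{.State.Status}}
    func containerStatus(name string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect",
    		name, "--format", "{{.State.Status}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	st, err := containerStatus("multinode-737786-m02")
    	fmt.Println(st, err)
    }
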
	I0522 18:33:13.956258  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:33:14.377830  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:33:14.377884  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:33:14.398081  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.414616  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:33:14.414636  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:33:14.454848  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:33:14.472868  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:33:14.472944  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.489872  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.490088  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.490103  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:33:14.602489  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.602516  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:33:14.602569  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.619132  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.619380  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.619398  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:33:14.740786  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:33:14.740854  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:33:14.756827  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:33:14.756995  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32902 <nil> <nil>}
	I0522 18:33:14.757012  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:33:14.867113  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
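
These provisioning commands run over SSH against the container's forwarded port on 127.0.0.1 (32902 here). A sketch using golang.org/x/crypto/ssh, with the user, key path, and port taken from the log rather than discovered dynamically:

    package provision

    import (
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH mirrors the step above: dial the container's forwarded SSH
    // port and run one shell command, returning its combined output.
    func runOverSSH(addr, keyPath, command string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node only
    	}
    	client, err := ssh.Dial("tcp", addr, cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(command)
    	return string(out), err
    }
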
	I0522 18:33:14.867142  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:33:14.867157  160939 ubuntu.go:177] setting up certificates
	I0522 18:33:14.867169  160939 provision.go:84] configureAuth start
	I0522 18:33:14.867230  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.882769  160939 provision.go:87] duration metric: took 15.590775ms to configureAuth
	W0522 18:33:14.882788  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.882814  160939 retry.go:31] will retry after 133.214µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.883930  160939 provision.go:84] configureAuth start
	I0522 18:33:14.883986  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.899452  160939 provision.go:87] duration metric: took 15.501642ms to configureAuth
	W0522 18:33:14.899474  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.899491  160939 retry.go:31] will retry after 108.916µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.900597  160939 provision.go:84] configureAuth start
	I0522 18:33:14.900654  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.915555  160939 provision.go:87] duration metric: took 14.940574ms to configureAuth
	W0522 18:33:14.915579  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.915597  160939 retry.go:31] will retry after 309.632µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.916706  160939 provision.go:84] configureAuth start
	I0522 18:33:14.916763  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.931974  160939 provision.go:87] duration metric: took 15.250688ms to configureAuth
	W0522 18:33:14.931998  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.932022  160939 retry.go:31] will retry after 318.322µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.933148  160939 provision.go:84] configureAuth start
	I0522 18:33:14.933214  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.948456  160939 provision.go:87] duration metric: took 15.28648ms to configureAuth
	W0522 18:33:14.948480  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.948498  160939 retry.go:31] will retry after 399.734µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.949641  160939 provision.go:84] configureAuth start
	I0522 18:33:14.949703  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.966281  160939 provision.go:87] duration metric: took 16.616876ms to configureAuth
	W0522 18:33:14.966304  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.966321  160939 retry.go:31] will retry after 408.958µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.967426  160939 provision.go:84] configureAuth start
	I0522 18:33:14.967490  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:14.983570  160939 provision.go:87] duration metric: took 16.124586ms to configureAuth
	W0522 18:33:14.983595  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.983618  160939 retry.go:31] will retry after 1.326072ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:14.985801  160939 provision.go:84] configureAuth start
	I0522 18:33:14.985868  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.000835  160939 provision.go:87] duration metric: took 15.012309ms to configureAuth
	W0522 18:33:15.000856  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.000876  160939 retry.go:31] will retry after 915.276µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.001989  160939 provision.go:84] configureAuth start
	I0522 18:33:15.002061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.016920  160939 provision.go:87] duration metric: took 14.912197ms to configureAuth
	W0522 18:33:15.016940  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.016956  160939 retry.go:31] will retry after 2.309554ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.020139  160939 provision.go:84] configureAuth start
	I0522 18:33:15.020206  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.035720  160939 provision.go:87] duration metric: took 15.563337ms to configureAuth
	W0522 18:33:15.035737  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.035758  160939 retry.go:31] will retry after 5.684682ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.041949  160939 provision.go:84] configureAuth start
	I0522 18:33:15.042023  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.057131  160939 provision.go:87] duration metric: took 15.161716ms to configureAuth
	W0522 18:33:15.057153  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.057173  160939 retry.go:31] will retry after 7.16749ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.065354  160939 provision.go:84] configureAuth start
	I0522 18:33:15.065419  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.080211  160939 provision.go:87] duration metric: took 14.836861ms to configureAuth
	W0522 18:33:15.080233  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.080253  160939 retry.go:31] will retry after 11.273171ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.092437  160939 provision.go:84] configureAuth start
	I0522 18:33:15.092522  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.107812  160939 provision.go:87] duration metric: took 15.35491ms to configureAuth
	W0522 18:33:15.107829  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.107845  160939 retry.go:31] will retry after 8.109728ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.117029  160939 provision.go:84] configureAuth start
	I0522 18:33:15.117103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.132558  160939 provision.go:87] duration metric: took 15.508983ms to configureAuth
	W0522 18:33:15.132577  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.132597  160939 retry.go:31] will retry after 10.345201ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.143792  160939 provision.go:84] configureAuth start
	I0522 18:33:15.143857  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.159011  160939 provision.go:87] duration metric: took 15.196792ms to configureAuth
	W0522 18:33:15.159034  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.159054  160939 retry.go:31] will retry after 30.499115ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.190240  160939 provision.go:84] configureAuth start
	I0522 18:33:15.190329  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.207177  160939 provision.go:87] duration metric: took 16.913741ms to configureAuth
	W0522 18:33:15.207195  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.207211  160939 retry.go:31] will retry after 63.879043ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.271445  160939 provision.go:84] configureAuth start
	I0522 18:33:15.271548  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.287528  160939 provision.go:87] duration metric: took 16.057048ms to configureAuth
	W0522 18:33:15.287550  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.287569  160939 retry.go:31] will retry after 67.853567ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.355802  160939 provision.go:84] configureAuth start
	I0522 18:33:15.355901  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.372258  160939 provision.go:87] duration metric: took 16.425467ms to configureAuth
	W0522 18:33:15.372281  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.372300  160939 retry.go:31] will retry after 129.065548ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.501513  160939 provision.go:84] configureAuth start
	I0522 18:33:15.501606  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.517774  160939 provision.go:87] duration metric: took 16.234544ms to configureAuth
	W0522 18:33:15.517792  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.517809  160939 retry.go:31] will retry after 177.855143ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.696167  160939 provision.go:84] configureAuth start
	I0522 18:33:15.696277  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:15.712184  160939 provision.go:87] duration metric: took 15.973904ms to configureAuth
	W0522 18:33:15.712203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.712222  160939 retry.go:31] will retry after 282.785493ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:15.995691  160939 provision.go:84] configureAuth start
	I0522 18:33:15.995782  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.011555  160939 provision.go:87] duration metric: took 15.836293ms to configureAuth
	W0522 18:33:16.011573  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.011590  160939 retry.go:31] will retry after 182.7986ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.194929  160939 provision.go:84] configureAuth start
	I0522 18:33:16.195022  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.210991  160939 provision.go:87] duration metric: took 16.035288ms to configureAuth
	W0522 18:33:16.211015  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.211031  160939 retry.go:31] will retry after 462.848752ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.674586  160939 provision.go:84] configureAuth start
	I0522 18:33:16.674669  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:16.691880  160939 provision.go:87] duration metric: took 17.266922ms to configureAuth
	W0522 18:33:16.691906  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:16.691924  160939 retry.go:31] will retry after 502.555206ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.194526  160939 provision.go:84] configureAuth start
	I0522 18:33:17.194646  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.210421  160939 provision.go:87] duration metric: took 15.865877ms to configureAuth
	W0522 18:33:17.210440  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.210460  160939 retry.go:31] will retry after 567.726401ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.779177  160939 provision.go:84] configureAuth start
	I0522 18:33:17.779290  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:17.795539  160939 provision.go:87] duration metric: took 16.336289ms to configureAuth
	W0522 18:33:17.795558  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:17.795575  160939 retry.go:31] will retry after 1.826878631s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.622720  160939 provision.go:84] configureAuth start
	I0522 18:33:19.622824  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:19.638518  160939 provision.go:87] duration metric: took 15.756609ms to configureAuth
	W0522 18:33:19.638535  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:19.638551  160939 retry.go:31] will retry after 1.924893574s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.564442  160939 provision.go:84] configureAuth start
	I0522 18:33:21.564544  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:21.580835  160939 provision.go:87] duration metric: took 16.362041ms to configureAuth
	W0522 18:33:21.580858  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:21.580874  160939 retry.go:31] will retry after 4.939303373s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.521956  160939 provision.go:84] configureAuth start
	I0522 18:33:26.522061  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:26.537982  160939 provision.go:87] duration metric: took 16.001203ms to configureAuth
	W0522 18:33:26.538004  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:26.538030  160939 retry.go:31] will retry after 3.636518909s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.175081  160939 provision.go:84] configureAuth start
	I0522 18:33:30.175184  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:30.191022  160939 provision.go:87] duration metric: took 15.915164ms to configureAuth
	W0522 18:33:30.191041  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:30.191058  160939 retry.go:31] will retry after 10.480093853s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.671328  160939 provision.go:84] configureAuth start
	I0522 18:33:40.671406  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:40.687409  160939 provision.go:87] duration metric: took 16.054951ms to configureAuth
	W0522 18:33:40.687427  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:40.687455  160939 retry.go:31] will retry after 15.937633407s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.627256  160939 provision.go:84] configureAuth start
	I0522 18:33:56.627376  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:33:56.643481  160939 provision.go:87] duration metric: took 16.179065ms to configureAuth
	W0522 18:33:56.643501  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:33:56.643521  160939 retry.go:31] will retry after 13.921044681s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.565323  160939 provision.go:84] configureAuth start
	I0522 18:34:10.565412  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:10.582184  160939 provision.go:87] duration metric: took 16.828213ms to configureAuth
	W0522 18:34:10.582203  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:10.582221  160939 retry.go:31] will retry after 29.913467421s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.496709  160939 provision.go:84] configureAuth start
	I0522 18:34:40.496791  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:40.512924  160939 provision.go:87] duration metric: took 16.185762ms to configureAuth
	W0522 18:34:40.512946  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512964  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:40.512971  160939 machine.go:97] duration metric: took 1m26.040084691s to provisionDockerMachine
	I0522 18:34:40.512977  160939 client.go:171] duration metric: took 1m31.612534317s to LocalClient.Create
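
The configureAuth failures above are retried with roughly exponentially growing, jittered delays, from a few hundred microseconds up to about 30 seconds, until the provisioning deadline expires. A generic sketch of that backoff pattern (tunables are illustrative; this is not minikube's retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryExpo retries fn with jittered exponential backoff, the pattern
    // behind the "will retry after ..." lines above.
    func retryExpo(fn func() error, maxElapsed time.Duration) error {
    	start := time.Now()
    	delay := 100 * time.Microsecond
    	for {
    		err := fn()
    		if err == nil {
    			return nil
    		}
    		if time.Since(start) > maxElapsed {
    			return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
    		}
    		// jittered exponential growth, capped so one sleep stays bounded
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
    		if delay < 30*time.Second {
    			delay *= 2
    		}
    	}
    }

    func main() {
    	err := retryExpo(func() error { return errors.New("container has no IP yet") }, time.Second)
    	fmt.Println(err)
    }
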
	I0522 18:34:42.514189  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:34:42.514234  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:42.530404  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32902 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:34:42.611715  160939 command_runner.go:130] > 27%
	I0522 18:34:42.611789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:34:42.615669  160939 command_runner.go:130] > 214G
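
createHost finishes with two disk checks over SSH: percent used and free gigabytes on /var (the literal "%" in the first result is what tripped Go's fmt placeholder noise in the raw log). A local sketch of running and parsing the same pipelines:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strconv"
    	"strings"
    )

    // diskUsage runs the same awk pipelines as the log's final checks and
    // returns percent-used of /var and free space in gigabytes.
    func diskUsage() (pct int, freeG int, err error) {
    	out, err := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
    	if err != nil {
    		return 0, 0, err
    	}
    	pct, err = strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(out)), "%"))
    	if err != nil {
    		return 0, 0, err
    	}
    	out, err = exec.Command("sh", "-c", "df -BG /var | awk 'NR==2{print $4}'").Output()
    	if err != nil {
    		return 0, 0, err
    	}
    	freeG, err = strconv.Atoi(strings.TrimSuffix(strings.TrimSpace(string(out)), "G"))
    	return pct, freeG, err
    }

    func main() {
    	fmt.Println(diskUsage())
    }
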
	I0522 18:34:42.615707  160939 start.go:128] duration metric: took 1m33.717073149s to createHost
	I0522 18:34:42.615722  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m33.717228717s
	W0522 18:34:42.615744  160939 start.go:713] error starting host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:42.616137  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:42.632434  160939 stop.go:39] StopHost: multinode-737786-m02
	W0522 18:34:42.632685  160939 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.634506  160939 out.go:177] * Stopping node "multinode-737786-m02"  ...
	I0522 18:34:42.635683  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	W0522 18:34:42.651010  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:42.652276  160939 out.go:177] * Powering off "multinode-737786-m02" via SSH ...
	I0522 18:34:42.653470  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	I0522 18:34:43.708767  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.725456  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:43.725497  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:43.725503  160939 stop.go:96] shutdown container: err=<nil>
	I0522 18:34:43.725538  160939 main.go:141] libmachine: Stopping "multinode-737786-m02"...
	I0522 18:34:43.725609  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:43.740494  160939 stop.go:66] stop err: Machine "multinode-737786-m02" is already stopped.
	I0522 18:34:43.740519  160939 stop.go:69] host is already stopped
	W0522 18:34:44.740739  160939 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0522 18:34:44.742589  160939 out.go:177] * Deleting "multinode-737786-m02" in docker ...
	I0522 18:34:44.743791  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	I0522 18:34:44.759917  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:44.775348  160939 cli_runner.go:164] Run: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0"
	W0522 18:34:44.791230  160939 cli_runner.go:211] docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0" returned with exit code 1
	I0522 18:34:44.791265  160939 oci.go:650] error shutdown multinode-737786-m02: docker exec --privileged -t multinode-737786-m02 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 2dc5a71c55c9ef5d6ad1baa728c2ff15efe34f377c26beee83af68ffc394ce01 is not running
	I0522 18:34:45.792215  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:45.808448  160939 oci.go:658] container multinode-737786-m02 status is Stopped
	I0522 18:34:45.808478  160939 oci.go:670] Successfully shutdown container multinode-737786-m02
	I0522 18:34:45.808522  160939 cli_runner.go:164] Run: docker rm -f -v multinode-737786-m02
	I0522 18:34:45.828241  160939 cli_runner.go:164] Run: docker container inspect -f {{.Id}} multinode-737786-m02
	W0522 18:34:45.843001  160939 cli_runner.go:211] docker container inspect -f {{.Id}} multinode-737786-m02 returned with exit code 1
	I0522 18:34:45.843068  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:45.858067  160939 cli_runner.go:164] Run: docker network rm multinode-737786
	W0522 18:34:45.872863  160939 cli_runner.go:211] docker network rm multinode-737786 returned with exit code 1
	W0522 18:34:45.872955  160939 kic.go:390] failed to remove network (which might be okay) multinode-737786: unable to delete a network that is attached to a running container
	W0522 18:34:45.873163  160939 out.go:239] ! StartHost failed, but will try again: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:45.873175  160939 start.go:728] Will try again in 5 seconds ...
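
Recovery after the failed provisioning is: best-effort shutdown over docker exec, force-remove of the container and its volumes, an attempted (and here tolerated) removal of the shared network, then a full createHost retry after 5 seconds. A condensed sketch of the teardown half:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // teardownNode condenses the recovery path in the log: try a clean
    // shutdown, then force-remove the container and its anonymous volumes.
    func teardownNode(name string) {
    	// Best-effort "sudo init 0"; the log shows this failing once the
    	// container is already stopped, which is safe to ignore.
    	_ = exec.Command("docker", "exec", "--privileged", "-t", name,
    		"/bin/bash", "-c", "sudo init 0").Run()
    	if err := exec.Command("docker", "rm", "-f", "-v", name).Run(); err != nil {
    		fmt.Println("force remove failed:", err)
    	}
    }

    func main() {
    	teardownNode("multinode-737786-m02")
    	time.Sleep(5 * time.Second) // matches "Will try again in 5 seconds"
    	// ...recreate the container here, as the log does on its second attempt.
    }
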
	I0522 18:34:50.874261  160939 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:34:50.874388  160939 start.go:364] duration metric: took 68.497µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:34:50.874412  160939 start.go:93] Provisioning new machine with config: &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}
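	For readability, the two node entries buried in the config dump above can be mirrored in a small Go sketch (field names as printed in the log; the types are assumptions): the primary already has its IP, while m02 starts with an empty IP that provisioning is expected to fill in.
	
		package main
	
		import "fmt"
	
		// Node mirrors the fields shown in Nodes:[...] above.
		type Node struct {
			Name              string
			IP                string
			Port              int
			KubernetesVersion string
			ControlPlane      bool
			Worker            bool
		}
	
		func main() {
			nodes := []Node{
				{IP: "192.168.67.2", Port: 8443, KubernetesVersion: "v1.30.1", ControlPlane: true, Worker: true},
				{Name: "m02", Port: 8443, KubernetesVersion: "v1.30.1", Worker: true}, // IP empty until provisioning succeeds
			}
			fmt.Printf("%+v\n", nodes)
		}
	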
	I0522 18:34:50.874486  160939 start.go:125] createHost starting for "m02" (driver="docker")
	I0522 18:34:50.876407  160939 out.go:204] * Creating docker container (CPUs=2, Memory=2200MB) ...
	I0522 18:34:50.876543  160939 start.go:159] libmachine.API.Create for "multinode-737786" (driver="docker")
	I0522 18:34:50.876576  160939 client.go:168] LocalClient.Create starting
	I0522 18:34:50.876662  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem
	I0522 18:34:50.876712  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876732  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.876835  160939 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem
	I0522 18:34:50.876869  160939 main.go:141] libmachine: Decoding PEM data...
	I0522 18:34:50.876890  160939 main.go:141] libmachine: Parsing certificate...
	I0522 18:34:50.877138  160939 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:34:50.893470  160939 network_create.go:77] Found existing network {name:multinode-737786 subnet:0xc0009258c0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 67 1] mtu:1500}
	I0522 18:34:50.893509  160939 kic.go:121] calculated static IP "192.168.67.3" for the "multinode-737786-m02" container
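	The "calculated static IP" line is consistent with the simple scheme implied by the log: gateway 192.168.67.1, primary node 192.168.67.2, so the second machine (m02) gets 192.168.67.3. A sketch under that assumption (not minikube's actual allocator):
	
		package main
	
		import (
			"fmt"
			"net"
		)
	
		// nodeIP returns gateway+index within a /24: index 1 -> .2, index 2 -> .3.
		func nodeIP(gateway net.IP, index int) net.IP {
			out := make(net.IP, 4)
			copy(out, gateway.To4())
			out[3] += byte(index)
			return out
		}
	
		func main() {
			fmt.Println(nodeIP(net.ParseIP("192.168.67.1"), 2)) // 192.168.67.3 for m02
		}
	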
	I0522 18:34:50.893558  160939 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0522 18:34:50.909079  160939 cli_runner.go:164] Run: docker volume create multinode-737786-m02 --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true
	I0522 18:34:50.925444  160939 oci.go:103] Successfully created a docker volume multinode-737786-m02
	I0522 18:34:50.925538  160939 cli_runner.go:164] Run: docker run --rm --name multinode-737786-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --entrypoint /usr/bin/test -v multinode-737786-m02:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -d /var/lib
	I0522 18:34:51.321868  160939 oci.go:107] Successfully prepared a docker volume multinode-737786-m02
	I0522 18:34:51.321909  160939 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:34:51.321928  160939 kic.go:194] Starting extracting preloaded images to volume ...
	I0522 18:34:51.321980  160939 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir
	I0522 18:34:55.613221  160939 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v multinode-737786-m02:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a -I lz4 -xf /preloaded.tar -C /extractDir: (4.291204502s)
	I0522 18:34:55.613251  160939 kic.go:203] duration metric: took 4.291320169s to extract preloaded images to volume ...
	W0522 18:34:55.613360  160939 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0522 18:34:55.613435  160939 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0522 18:34:55.658317  160939 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname multinode-737786-m02 --name multinode-737786-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=multinode-737786-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=multinode-737786-m02 --network multinode-737786 --ip 192.168.67.3 --volume multinode-737786-m02:/var --security-opt apparmor=unconfined --memory=2200mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a
	I0522 18:34:55.924047  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Running}}
	I0522 18:34:55.941247  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:55.958588  160939 cli_runner.go:164] Run: docker exec multinode-737786-m02 stat /var/lib/dpkg/alternatives/iptables
	I0522 18:34:56.004446  160939 oci.go:144] the created container "multinode-737786-m02" has a running status.
	I0522 18:34:56.004476  160939 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa...
	I0522 18:34:56.219497  160939 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0522 18:34:56.219536  160939 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0522 18:34:56.240489  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.268881  160939 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0522 18:34:56.268907  160939 kic_runner.go:114] Args: [docker exec --privileged multinode-737786-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0522 18:34:56.353114  160939 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:34:56.375972  160939 machine.go:94] provisionDockerMachine start ...
	I0522 18:34:56.376058  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.395706  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.395915  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.395934  160939 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:34:56.554445  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.554477  160939 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:34:56.554533  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.573230  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.573401  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.573414  160939 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:34:56.702163  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:34:56.702242  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:34:56.718029  160939 main.go:141] libmachine: Using SSH client type: native
	I0522 18:34:56.718187  160939 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32907 <nil> <nil>}
	I0522 18:34:56.718204  160939 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:34:56.830876  160939 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:34:56.830907  160939 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:34:56.830922  160939 ubuntu.go:177] setting up certificates
	I0522 18:34:56.830931  160939 provision.go:84] configureAuth start
	I0522 18:34:56.830976  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.846805  160939 provision.go:87] duration metric: took 15.865379ms to configureAuth
	W0522 18:34:56.846831  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.846851  160939 retry.go:31] will retry after 140.64µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.847967  160939 provision.go:84] configureAuth start
	I0522 18:34:56.848042  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.862744  160939 provision.go:87] duration metric: took 14.756628ms to configureAuth
	W0522 18:34:56.862761  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.862777  160939 retry.go:31] will retry after 137.24µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.863887  160939 provision.go:84] configureAuth start
	I0522 18:34:56.863944  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.878368  160939 provision.go:87] duration metric: took 14.464443ms to configureAuth
	W0522 18:34:56.878383  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.878401  160939 retry.go:31] will retry after 307.999µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.879516  160939 provision.go:84] configureAuth start
	I0522 18:34:56.879573  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.894089  160939 provision.go:87] duration metric: took 14.555182ms to configureAuth
	W0522 18:34:56.894104  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.894119  160939 retry.go:31] will retry after 344.81µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.895224  160939 provision.go:84] configureAuth start
	I0522 18:34:56.895305  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.909660  160939 provision.go:87] duration metric: took 14.420335ms to configureAuth
	W0522 18:34:56.909677  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.909697  160939 retry.go:31] will retry after 721.739µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.910804  160939 provision.go:84] configureAuth start
	I0522 18:34:56.910856  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.925678  160939 provision.go:87] duration metric: took 14.857697ms to configureAuth
	W0522 18:34:56.925695  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.925714  160939 retry.go:31] will retry after 381.6µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.926834  160939 provision.go:84] configureAuth start
	I0522 18:34:56.926886  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.941681  160939 provision.go:87] duration metric: took 14.831201ms to configureAuth
	W0522 18:34:56.941702  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.941722  160939 retry.go:31] will retry after 897.088µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.942836  160939 provision.go:84] configureAuth start
	I0522 18:34:56.942908  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.957491  160939 provision.go:87] duration metric: took 14.636033ms to configureAuth
	W0522 18:34:56.957512  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.957529  160939 retry.go:31] will retry after 1.800181ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.959714  160939 provision.go:84] configureAuth start
	I0522 18:34:56.959790  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.976307  160939 provision.go:87] duration metric: took 16.571335ms to configureAuth
	W0522 18:34:56.976326  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.976342  160939 retry.go:31] will retry after 2.324455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.979479  160939 provision.go:84] configureAuth start
	I0522 18:34:56.979532  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:56.994677  160939 provision.go:87] duration metric: took 15.180277ms to configureAuth
	W0522 18:34:56.994693  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.994709  160939 retry.go:31] will retry after 3.105759ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:56.998893  160939 provision.go:84] configureAuth start
	I0522 18:34:56.998946  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.014214  160939 provision.go:87] duration metric: took 15.303755ms to configureAuth
	W0522 18:34:57.014235  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.014254  160939 retry.go:31] will retry after 5.839455ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.020445  160939 provision.go:84] configureAuth start
	I0522 18:34:57.020525  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.035868  160939 provision.go:87] duration metric: took 15.4048ms to configureAuth
	W0522 18:34:57.035886  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.035903  160939 retry.go:31] will retry after 5.406932ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.042088  160939 provision.go:84] configureAuth start
	I0522 18:34:57.042156  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.058449  160939 provision.go:87] duration metric: took 16.342041ms to configureAuth
	W0522 18:34:57.058472  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.058492  160939 retry.go:31] will retry after 11.838168ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.070675  160939 provision.go:84] configureAuth start
	I0522 18:34:57.070741  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.085470  160939 provision.go:87] duration metric: took 14.777244ms to configureAuth
	W0522 18:34:57.085486  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.085502  160939 retry.go:31] will retry after 23.959822ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.109694  160939 provision.go:84] configureAuth start
	I0522 18:34:57.109776  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.124985  160939 provision.go:87] duration metric: took 15.261358ms to configureAuth
	W0522 18:34:57.125000  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.125016  160939 retry.go:31] will retry after 27.869578ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.153221  160939 provision.go:84] configureAuth start
	I0522 18:34:57.153307  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.169108  160939 provision.go:87] duration metric: took 15.85438ms to configureAuth
	W0522 18:34:57.169127  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.169146  160939 retry.go:31] will retry after 51.257536ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.221342  160939 provision.go:84] configureAuth start
	I0522 18:34:57.221408  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.237003  160939 provision.go:87] duration metric: took 15.637311ms to configureAuth
	W0522 18:34:57.237024  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.237043  160939 retry.go:31] will retry after 39.576908ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.277194  160939 provision.go:84] configureAuth start
	I0522 18:34:57.277272  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.292521  160939 provision.go:87] duration metric: took 15.297184ms to configureAuth
	W0522 18:34:57.292539  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.292557  160939 retry.go:31] will retry after 99.452062ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.392811  160939 provision.go:84] configureAuth start
	I0522 18:34:57.392913  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.410711  160939 provision.go:87] duration metric: took 17.84636ms to configureAuth
	W0522 18:34:57.410765  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.410815  160939 retry.go:31] will retry after 143.960372ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.555133  160939 provision.go:84] configureAuth start
	I0522 18:34:57.555208  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.571320  160939 provision.go:87] duration metric: took 16.160526ms to configureAuth
	W0522 18:34:57.571343  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.571360  160939 retry.go:31] will retry after 155.348601ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.727681  160939 provision.go:84] configureAuth start
	I0522 18:34:57.727762  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:57.743313  160939 provision.go:87] duration metric: took 15.603694ms to configureAuth
	W0522 18:34:57.743335  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:57.743351  160939 retry.go:31] will retry after 378.804808ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.122902  160939 provision.go:84] configureAuth start
	I0522 18:34:58.123010  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.139688  160939 provision.go:87] duration metric: took 16.744877ms to configureAuth
	W0522 18:34:58.139707  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.139724  160939 retry.go:31] will retry after 334.927027ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.475218  160939 provision.go:84] configureAuth start
	I0522 18:34:58.475348  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.491224  160939 provision.go:87] duration metric: took 15.959288ms to configureAuth
	W0522 18:34:58.491241  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.491258  160939 retry.go:31] will retry after 382.857061ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.874898  160939 provision.go:84] configureAuth start
	I0522 18:34:58.875006  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:58.891400  160939 provision.go:87] duration metric: took 16.476022ms to configureAuth
	W0522 18:34:58.891425  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:58.891445  160939 retry.go:31] will retry after 908.607112ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.800452  160939 provision.go:84] configureAuth start
	I0522 18:34:59.800565  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:34:59.817521  160939 provision.go:87] duration metric: took 17.040678ms to configureAuth
	W0522 18:34:59.817541  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:34:59.817559  160939 retry.go:31] will retry after 2.399990762s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.218011  160939 provision.go:84] configureAuth start
	I0522 18:35:02.218103  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:02.233382  160939 provision.go:87] duration metric: took 15.343422ms to configureAuth
	W0522 18:35:02.233400  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:02.233417  160939 retry.go:31] will retry after 3.631413751s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.866094  160939 provision.go:84] configureAuth start
	I0522 18:35:05.866192  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:05.883038  160939 provision.go:87] duration metric: took 16.913162ms to configureAuth
	W0522 18:35:05.883057  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:05.883075  160939 retry.go:31] will retry after 4.401726343s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.285941  160939 provision.go:84] configureAuth start
	I0522 18:35:10.286047  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:10.303158  160939 provision.go:87] duration metric: took 17.185304ms to configureAuth
	W0522 18:35:10.303178  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:10.303195  160939 retry.go:31] will retry after 5.499851087s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.803345  160939 provision.go:84] configureAuth start
	I0522 18:35:15.803456  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:15.820047  160939 provision.go:87] duration metric: took 16.668915ms to configureAuth
	W0522 18:35:15.820069  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:15.820088  160939 retry.go:31] will retry after 6.21478213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.035749  160939 provision.go:84] configureAuth start
	I0522 18:35:22.035888  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:22.052346  160939 provision.go:87] duration metric: took 16.569923ms to configureAuth
	W0522 18:35:22.052365  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:22.052383  160939 retry.go:31] will retry after 10.717404274s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.770612  160939 provision.go:84] configureAuth start
	I0522 18:35:32.770702  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:32.786847  160939 provision.go:87] duration metric: took 16.20902ms to configureAuth
	W0522 18:35:32.786866  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:32.786882  160939 retry.go:31] will retry after 26.374349839s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.162251  160939 provision.go:84] configureAuth start
	I0522 18:35:59.162338  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:35:59.177866  160939 provision.go:87] duration metric: took 15.590678ms to configureAuth
	W0522 18:35:59.177883  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:35:59.177900  160939 retry.go:31] will retry after 23.779194983s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.957560  160939 provision.go:84] configureAuth start
	I0522 18:36:22.957642  160939 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:36:22.973473  160939 provision.go:87] duration metric: took 15.882846ms to configureAuth
	W0522 18:36:22.973490  160939 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973508  160939 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:22.973514  160939 machine.go:97] duration metric: took 1m26.59751999s to provisionDockerMachine
	I0522 18:36:22.973521  160939 client.go:171] duration metric: took 1m32.0969361s to LocalClient.Create
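	Every configureAuth attempt above re-runs the same docker container inspect with .NetworkSettings.Networks indexed by the container name "multinode-737786-m02", while the container was attached to the network "multinode-737786" (see the docker run earlier), which would explain why the template output stays empty no matter how long the retries run. The retry.go delays trace a doubling backoff with jitter (140µs, 307µs, ... up to ~26s); a sketch of that pattern, not minikube's retry implementation:
	
		package main
	
		import (
			"errors"
			"fmt"
			"math/rand"
			"time"
		)
	
		// retryWithBackoff doubles the wait (with jitter) after each failure
		// until the overall deadline is exhausted.
		func retryWithBackoff(op func() error, initial, max, deadline time.Duration) error {
			delay := initial
			start := time.Now()
			for {
				err := op()
				if err == nil {
					return nil
				}
				if time.Since(start) > deadline {
					return fmt.Errorf("giving up: %w", err)
				}
				time.Sleep(delay + time.Duration(rand.Int63n(int64(delay))))
				if delay < max {
					delay *= 2
				}
			}
		}
	
		func main() {
			err := retryWithBackoff(func() error { return errors.New("temporary: empty container address") },
				100*time.Microsecond, 30*time.Second, 2*time.Second)
			fmt.Println(err)
		}
	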
	I0522 18:36:24.974123  160939 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:36:24.974170  160939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:36:24.990325  160939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32907 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:36:25.071724  160939 command_runner.go:130] > 27%
	I0522 18:36:25.071789  160939 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:36:25.075456  160939 command_runner.go:130] > 214G
	I0522 18:36:25.075742  160939 start.go:128] duration metric: took 1m34.201241799s to createHost
	I0522 18:36:25.075767  160939 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m34.20136546s
	W0522 18:36:25.075854  160939 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:36:25.077767  160939 out.go:177] 
	W0522 18:36:25.079095  160939 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: creating host: create: provisioning: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:36:25.079109  160939 out.go:239] * 
	W0522 18:36:25.079919  160939 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:36:25.081455  160939 out.go:177] 
	
	
	==> Docker <==
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:48:14 multinode-737786 dockerd[1210]: 2024/05/22 18:48:14 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:21 multinode-737786 dockerd[1210]: 2024/05/22 18:52:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:21 multinode-737786 dockerd[1210]: 2024/05/22 18:52:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:21 multinode-737786 dockerd[1210]: 2024/05/22 18:52:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:21 multinode-737786 dockerd[1210]: 2024/05/22 18:52:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:21 multinode-737786 dockerd[1210]: 2024/05/22 18:52:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:21 multinode-737786 dockerd[1210]: 2024/05/22 18:52:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:22 multinode-737786 dockerd[1210]: 2024/05/22 18:52:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:22 multinode-737786 dockerd[1210]: 2024/05/22 18:52:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:22 multinode-737786 dockerd[1210]: 2024/05/22 18:52:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:52:33 multinode-737786 dockerd[1210]: 2024/05/22 18:52:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2e5611854b2b6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   7fefb8ab9046a       busybox-fc5497c4f-7zbr8
	14ca8a91c3a85       cbb01a7bd410d                                                                                         22 minutes ago      Running             coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              22 minutes ago      Running             kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	16cb7c11afec8       6e38f40d628db                                                                                         22 minutes ago      Running             storage-provisioner       0                   27a641da2a092       storage-provisioner
	b73d925361c05       cbb01a7bd410d                                                                                         22 minutes ago      Exited              coredns                   0                   6711c2a968d71       coredns-7db6d8ff4d-jhsz9
	4394527287d9e       747097150317f                                                                                         22 minutes ago      Running             kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                                         22 minutes ago      Running             kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                                         22 minutes ago      Running             etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                                         22 minutes ago      Running             kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                                         22 minutes ago      Running             kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	[INFO] 10.244.0.3:48378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238684s
	[INFO] 10.244.0.3:59221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013090305s
	[INFO] 10.244.0.3:42881 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000740933s
	[INFO] 10.244.0.3:51488 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.022252255s
	[INFO] 10.244.0.3:57389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143058s
	[INFO] 10.244.0.3:48854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005255577s
	[INFO] 10.244.0.3:37749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129992s
	[INFO] 10.244.0.3:49159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143259s
	[INFO] 10.244.0.3:33267 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003880164s
	[INFO] 10.244.0.3:55644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123464s
	[INFO] 10.244.0.3:40518 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115443s
	[INFO] 10.244.0.3:44250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088045s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102385s
	[INFO] 10.244.0.3:58734 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104426s
	[INFO] 10.244.0.3:33373 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089833s
	[INFO] 10.244.0.3:46218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084391s
	[INFO] 10.244.0.3:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011407s
	[INFO] 10.244.0.3:41894 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140377s
	[INFO] 10.244.0.3:40760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132699s
	[INFO] 10.244.0.3:37622 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097943s
	
	
	==> coredns [b73d925361c0] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
	[ERROR] plugin/errors: 2 6378142707441429934.7718871847752614605. HINFO: dial udp 192.168.67.1:53: connect: network is unreachable
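	This exited coredns instance fails consistently: every dial to 10.96.0.1:443 (the kubernetes service VIP) and 192.168.67.1:53 returns "network is unreachable", i.e. the pod had no route at that moment rather than a refused connection. A small diagnostic sketch that reproduces the check (run from inside the pod's network namespace; not part of the test suite):
	
		package main
	
		import (
			"fmt"
			"net"
			"time"
		)
	
		func main() {
			// 10.96.0.1:443 is the in-cluster kubernetes service VIP from the errors above.
			conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
			if err != nil {
				fmt.Println("dial failed:", err) // e.g. connect: network is unreachable
				return
			}
			conn.Close()
			fmt.Println("kubernetes service VIP reachable")
		}
	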
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:55:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:52:01 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 796df425fb994719a2b6ac89f60c2334
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     22m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         22m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22m   kube-proxy       
	  Normal  Starting                 22m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  22m   kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m   kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m   kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  22m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           22m   node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	
	
	==> dmesg <==
	[  +0.006301] FS-Cache: N-cookie c=0000000d [p=00000003 fl=2 nc=0 na=1]
	[  +0.007962] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000090b4195c
	[  +0.008724] FS-Cache: N-key=[8] '0490130200000000'
	[  +2.340067] FS-Cache: Duplicate cookie detected
	[  +0.004679] FS-Cache: O-cookie c=00000006 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006760] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000003060a4be
	[  +0.007535] FS-Cache: O-key=[8] '0390130200000000'
	[  +0.004927] FS-Cache: N-cookie c=0000000f [p=00000003 fl=2 nc=0 na=1]
	[  +0.006668] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=0000000075807f04
	[  +0.008768] FS-Cache: N-key=[8] '0390130200000000'
	[  +0.243815] FS-Cache: Duplicate cookie detected
	[  +0.004693] FS-Cache: O-cookie c=00000009 [p=00000003 fl=226 nc=0 na=1]
	[  +0.006773] FS-Cache: O-cookie d=000000000014cea0{9p.inode} n=000000000947cc97
	[  +0.007354] FS-Cache: O-key=[8] '0690130200000000'
	[  +0.004922] FS-Cache: N-cookie c=00000010 [p=00000003 fl=2 nc=0 na=1]
	[  +0.006590] FS-Cache: N-cookie d=000000000014cea0{9p.inode} n=00000000dd977380
	[  +0.008723] FS-Cache: N-key=[8] '0690130200000000'
	[  +4.941227] IPv4: martian source 10.244.0.1 from 10.244.0.13, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 52 3f ff d3 2d 32 08 06
	[May22 18:20] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 88 87 ea 82 8c 08 06
	[  +0.002367] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 52 1a b3 ac 14 45 08 06
	[May22 18:31] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 1a 89 e2 0f b2 b8 08 06
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.364428Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364435Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364447Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.364467Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:32:33.365643Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.365639Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:32:33.365646Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.365693Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:32:33.36588Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.365903Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	{"level":"info","ts":"2024-05-22T18:42:33.669298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-05-22T18:42:33.674226Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":669,"took":"4.650962ms","hash":2988179383,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-22T18:42:33.674261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2988179383,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:47:33.674441Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-05-22T18:47:33.676887Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":911,"took":"2.169071ms","hash":3399617496,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:47:33.676921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3399617496,"revision":911,"compact-revision":669}
	{"level":"info","ts":"2024-05-22T18:52:33.678754Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1153}
	{"level":"info","ts":"2024-05-22T18:52:33.681122Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1153,"took":"2.100554ms","hash":435437424,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:52:33.681165Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435437424,"revision":1153,"compact-revision":911}
	
	
	==> kernel <==
	 18:55:16 up  1:37,  0 users,  load average: 0.20, 0.29, 0.31
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:53:06.845777       1 main.go:227] handling current node
	I0522 18:53:16.849038       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:16.849062       1 main.go:227] handling current node
	I0522 18:53:26.852270       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:26.852292       1 main.go:227] handling current node
	I0522 18:53:36.861628       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:36.861651       1 main.go:227] handling current node
	I0522 18:53:46.865179       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:46.865201       1 main.go:227] handling current node
	I0522 18:53:56.868146       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:56.868167       1 main.go:227] handling current node
	I0522 18:54:06.871251       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:06.871301       1 main.go:227] handling current node
	I0522 18:54:16.877176       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:16.877198       1 main.go:227] handling current node
	I0522 18:54:26.880323       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:26.880354       1 main.go:227] handling current node
	I0522 18:54:36.882866       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:36.882888       1 main.go:227] handling current node
	I0522 18:54:46.886203       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:46.886223       1 main.go:227] handling current node
	I0522 18:54:56.888938       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:56.888961       1 main.go:227] handling current node
	I0522 18:55:06.893856       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:55:06.893878       1 main.go:227] handling current node
	
	
	==> kube-apiserver [6991b35c6800] <==
	I0522 18:32:35.449798       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:32:35.453291       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:32:35.453308       1 policy_source.go:224] refreshing policies
	I0522 18:32:35.468422       1 controller.go:615] quota admission added evaluator for: namespaces
	I0522 18:32:35.648097       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:32:36.270908       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0522 18:32:36.276360       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0522 18:32:36.276373       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:32:36.650126       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0522 18:32:36.683129       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0522 18:32:36.777692       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0522 18:32:36.791941       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.67.2]
	I0522 18:32:36.793832       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:32:36.798754       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0522 18:32:37.359568       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0522 18:32:37.803958       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0522 18:32:37.812834       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0522 18:32:37.819384       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0522 18:32:51.513861       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0522 18:32:51.614880       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0522 18:48:10.913684       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57644: use of closed network connection
	E0522 18:48:11.175047       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57696: use of closed network connection
	E0522 18:48:11.423032       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57770: use of closed network connection
	E0522 18:48:13.525053       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57842: use of closed network connection
	E0522 18:48:13.672815       1 conn.go:339] Error on socket receive: read tcp 192.168.67.2:8443->192.168.67.1:57864: use of closed network connection
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	I0522 18:36:27.123251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.253947ms"
	I0522 18:36:27.133722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.381144ms"
	I0522 18:36:27.133807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.98µs"
	I0522 18:36:27.133845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.606µs"
	I0522 18:36:30.202749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.775378ms"
	I0522 18:36:30.202822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.162µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377607       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:35.377344       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.252907    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.988563    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fhhmr" podStartSLOduration=2.9885258439999998 podStartE2EDuration="2.988525844s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.988079663 +0000 UTC m=+16.414649501" watchObservedRunningTime="2024-05-22 18:32:53.988525844 +0000 UTC m=+16.415095679"
	May 22 18:32:53 multinode-737786 kubelet[2370]: I0522 18:32:53.995975    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=0.995953678 podStartE2EDuration="995.953678ms" podCreationTimestamp="2024-05-22 18:32:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:53.995721962 +0000 UTC m=+16.422291803" watchObservedRunningTime="2024-05-22 18:32:53.995953678 +0000 UTC m=+16.422523513"
	May 22 18:32:54 multinode-737786 kubelet[2370]: I0522 18:32:54.011952    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-jhsz9" podStartSLOduration=3.011934656 podStartE2EDuration="3.011934656s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-22 18:32:54.011824217 +0000 UTC m=+16.438394051" watchObservedRunningTime="2024-05-22 18:32:54.011934656 +0000 UTC m=+16.438504490"
	May 22 18:32:56 multinode-737786 kubelet[2370]: I0522 18:32:56.027149    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qpfbl" podStartSLOduration=2.150242403 podStartE2EDuration="5.027130161s" podCreationTimestamp="2024-05-22 18:32:51 +0000 UTC" firstStartedPulling="2024-05-22 18:32:52.549285586 +0000 UTC m=+14.975855404" lastFinishedPulling="2024-05-22 18:32:55.426173334 +0000 UTC m=+17.852743162" observedRunningTime="2024-05-22 18:32:56.026868759 +0000 UTC m=+18.453438592" watchObservedRunningTime="2024-05-22 18:32:56.027130161 +0000 UTC m=+18.453699994"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.024575    2370 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	May 22 18:32:58 multinode-737786 kubelet[2370]: I0522 18:32:58.025200    2370 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467011    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467063    2370 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") pod \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\" (UID: \"be9eeea7-ca23-4606-8965-0eb7a95e4a0d\") "
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.467471    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume" (OuterVolumeSpecName: "config-volume") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.469105    2370 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9" (OuterVolumeSpecName: "kube-api-access-44bz9") pod "be9eeea7-ca23-4606-8965-0eb7a95e4a0d" (UID: "be9eeea7-ca23-4606-8965-0eb7a95e4a0d"). InnerVolumeSpecName "kube-api-access-44bz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567723    2370 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-44bz9\" (UniqueName: \"kubernetes.io/projected/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-kube-api-access-44bz9\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:06 multinode-737786 kubelet[2370]: I0522 18:33:06.567767    2370 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/be9eeea7-ca23-4606-8965-0eb7a95e4a0d-config-volume\") on node \"multinode-737786\" DevicePath \"\""
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.104709    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.116635    2370 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6711c2a968d71ed296f2c5ec32fcc2c4af987442a6cd05a769a893a98d12df90"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.118819    2370 scope.go:117] "RemoveContainer" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: E0522 18:33:07.119523    2370 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de" containerID="ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.119568    2370 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"} err="failed to get container status \"ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de\": rpc error: code = Unknown desc = Error response from daemon: No such container: ab4423bae1876f659323df5fd86b2a2269054a9c1ce6df9d5197d1a5020639de"
	May 22 18:33:07 multinode-737786 kubelet[2370]: I0522 18:33:07.656301    2370 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" path="/var/lib/kubelet/pods/be9eeea7-ca23-4606-8965-0eb7a95e4a0d/volumes"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113341    2370 topology_manager.go:215] "Topology Admit Handler" podUID="3cb1c926-1ddd-432d-bfae-23cc2cf1d67e" podNamespace="default" podName="busybox-fc5497c4f-7zbr8"
	May 22 18:36:27 multinode-737786 kubelet[2370]: E0522 18:36:27.113441    2370 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.113480    2370 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9eeea7-ca23-4606-8965-0eb7a95e4a0d" containerName="coredns"
	May 22 18:36:27 multinode-737786 kubelet[2370]: I0522 18:36:27.310549    2370 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt2v4\" (UniqueName: \"kubernetes.io/projected/3cb1c926-1ddd-432d-bfae-23cc2cf1d67e-kube-api-access-bt2v4\") pod \"busybox-fc5497c4f-7zbr8\" (UID: \"3cb1c926-1ddd-432d-bfae-23cc2cf1d67e\") " pod="default/busybox-fc5497c4f-7zbr8"
	May 22 18:36:30 multinode-737786 kubelet[2370]: I0522 18:36:30.199164    2370 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-7zbr8" podStartSLOduration=1.5746006019999998 podStartE2EDuration="3.199142439s" podCreationTimestamp="2024-05-22 18:36:27 +0000 UTC" firstStartedPulling="2024-05-22 18:36:27.886226491 +0000 UTC m=+230.312796315" lastFinishedPulling="2024-05-22 18:36:29.510768323 +0000 UTC m=+231.937338152" observedRunningTime="2024-05-22 18:36:30.198865287 +0000 UTC m=+232.625435120" watchObservedRunningTime="2024-05-22 18:36:30.199142439 +0000 UTC m=+232.625712274"
	May 22 18:48:11 multinode-737786 kubelet[2370]: E0522 18:48:11.423039    2370 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp [::1]:55084->[::1]:43097: write tcp [::1]:55084->[::1]:43097: write: broken pipe
	
	
	==> storage-provisioner [16cb7c11afec] <==
	I0522 18:32:53.558799       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:32:53.565899       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:32:53.565955       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:32:53.572167       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:32:53.572280       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	I0522 18:32:53.573084       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef became leader
	I0522 18:32:53.672834       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_11a3ad44-b3a8-4c71-a29a-66f0773632ef!
	

                                                
                                                
-- /stdout --
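The sections above are the standard minikube post-mortem bundle for this profile (node description, dmesg, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler, kubelet, and storage-provisioner logs). As a rough sketch, the same bundle can be regenerated against this cluster with the commands below; the profile/context name multinode-737786 is taken from this report, and --file merely captures the output to disk:

	out/minikube-linux-amd64 logs -p multinode-737786 --file=postmortem.txt
	kubectl --context multinode-737786 describe node multinode-737786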
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/StartAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  3m38s (x4 over 18m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
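The FailedScheduling event above explains why busybox-fc5497c4f-cq58n is the non-running pod: with only one node Ready, and busybox-fc5497c4f-7zbr8 already placed on it (see the node description earlier), the pod's anti-affinity rule leaves the scheduler with 0/1 usable nodes and no preemption victims. A quick way to inspect the blocking rule and node availability, sketched here with jsonpath against the context used throughout this report:

	kubectl --context multinode-737786 get pod busybox-fc5497c4f-cq58n -o jsonpath='{.spec.affinity.podAntiAffinity}'
	kubectl --context multinode-737786 get nodes -o wide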
helpers_test.go:285: <<< TestMultiNode/serial/StartAfterStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StartAfterStop (162.58s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (137.53s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-737786
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-737786
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-737786: (12.894504684s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-737786 --wait=true -v=8 --alsologtostderr
E0522 18:56:55.310538   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:57:24.838399   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-737786 --wait=true -v=8 --alsologtostderr: exit status 80 (2m2.828846146s)

                                                
                                                
-- stdout --
	* [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "multinode-737786" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "multinode-737786-m02" ...
	* Updating the running docker "multinode-737786-m02" container ...
	
	

                                                
                                                
-- /stdout --
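Note that the restart output stops after "Updating the running docker "multinode-737786-m02" container ...", although the loaded profile (see the cluster config in the stderr log below) also lists a third node, m03, with Port:0 and an empty ContainerRuntime; the exit status 80 suggests the run failed before m03 came back. Two commands that would show how far the restart got, assuming only the profile and container names already used in this report:

	out/minikube-linux-amd64 status -p multinode-737786
	docker ps -a --filter name=multinode-737786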
** stderr ** 
	I0522 18:55:30.016582  191271 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:55:30.016705  191271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:30.016722  191271 out.go:304] Setting ErrFile to fd 2...
	I0522 18:55:30.016730  191271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:30.016907  191271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:55:30.017442  191271 out.go:298] Setting JSON to false
	I0522 18:55:30.018352  191271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5874,"bootTime":1716398256,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:55:30.018407  191271 start.go:139] virtualization: kvm guest
	I0522 18:55:30.020609  191271 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:55:30.022032  191271 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:55:30.023205  191271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:55:30.022039  191271 notify.go:220] Checking for updates...
	I0522 18:55:30.024646  191271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:30.025941  191271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:55:30.027248  191271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:55:30.028476  191271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:55:30.030067  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:30.030140  191271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:55:30.051240  191271 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:55:30.051370  191271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:55:30.102381  191271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:55:30.093628495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:55:30.102490  191271 docker.go:295] overlay module found
	I0522 18:55:30.104504  191271 out.go:177] * Using the docker driver based on existing profile
	I0522 18:55:30.105610  191271 start.go:297] selected driver: docker
	I0522 18:55:30.105625  191271 start.go:901] validating driver "docker" against &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:30.105706  191271 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:55:30.105775  191271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:55:30.148150  191271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:55:30.139765007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:55:30.149022  191271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:55:30.149059  191271 cni.go:84] Creating CNI manager for ""
	I0522 18:55:30.149071  191271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:55:30.149133  191271 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:30.151019  191271 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:55:30.152138  191271 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:55:30.153345  191271 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:55:30.154404  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:30.154431  191271 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:55:30.154440  191271 cache.go:56] Caching tarball of preloaded images
	I0522 18:55:30.154497  191271 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:55:30.154509  191271 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:55:30.154516  191271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:55:30.154599  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:30.169685  191271 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:55:30.169705  191271 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:55:30.169727  191271 cache.go:194] Successfully downloaded all kic artifacts
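
The image.go:79/83 pair above is an existence check against the local daemon: the kicbase reference is already present, so both the pull and the cache load are skipped. A stand-in for that check using only the docker CLI (imageInDaemon is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os/exec"
)

// imageInDaemon reports whether the daemon already has the image:
// `docker image inspect` exits non-zero when the reference is absent.
func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"
	fmt.Println(ref, "in daemon:", imageInDaemon(ref))
}
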
	I0522 18:55:30.169758  191271 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:30.169840  191271 start.go:364] duration metric: took 44.168µs to acquireMachinesLock for "multinode-737786"
	I0522 18:55:30.169862  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:30.169876  191271 fix.go:54] fixHost starting: 
	I0522 18:55:30.170113  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:30.186497  191271 fix.go:112] recreateIfNeeded on multinode-737786: state=Stopped err=<nil>
	W0522 18:55:30.186530  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:30.188329  191271 out.go:177] * Restarting existing docker container for "multinode-737786" ...
	I0522 18:55:30.189575  191271 cli_runner.go:164] Run: docker start multinode-737786
	I0522 18:55:30.434280  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:30.450599  191271 kic.go:430] container "multinode-737786" state is running.
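
fix.go above inspects the container, sees state=Stopped, and restarts it rather than recreating it. A minimal rendering of that decision, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState mirrors the inspect call in the log:
// docker container inspect <name> --format {{.State.Status}}
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"--format", "{{.State.Status}}", name).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "multinode-737786"
	state, err := containerState(name)
	if err != nil {
		panic(err) // no such container: the real flow would recreate it
	}
	if state != "running" {
		// state=Stopped in the log, so the container is simply restarted
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println(name, "is running")
}
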
	I0522 18:55:30.450960  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:30.469222  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:30.469408  191271 machine.go:94] provisionDockerMachine start ...
	I0522 18:55:30.469451  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:30.486145  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:30.486342  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:30.486358  191271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:55:30.486939  191271 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33870->127.0.0.1:32927: read: connection reset by peer
	I0522 18:55:33.598615  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:55:33.598642  191271 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:55:33.598705  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.616028  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:33.616267  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:33.616289  191271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:55:33.737498  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:55:33.737589  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.753768  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:33.753939  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:33.753956  191271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:55:33.862867  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
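
The guarded shell block above keeps the /etc/hosts edit idempotent: nothing is written when an entry for the hostname already exists, which is why the command output here is empty. A simplified local sketch of the same guard; the branch that rewrites an existing 127.0.1.1 line is omitted:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// ensureHostsEntry appends "127.0.1.1 <hostname>" unless some line already
// ends with the hostname -- the same intent as the grep -xq guard above.
func ensureHostsEntry(path, hostname string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(hostname) + `$`).Match(b) {
		return nil // already present: do nothing, as in this run
	}
	f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = fmt.Fprintf(f, "127.0.1.1 %s\n", hostname)
	return err
}

func main() {
	fmt.Println(ensureHostsEntry("/etc/hosts", "multinode-737786"))
}
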
	I0522 18:55:33.862895  191271 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:55:33.862922  191271 ubuntu.go:177] setting up certificates
	I0522 18:55:33.862933  191271 provision.go:84] configureAuth start
	I0522 18:55:33.862986  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:33.879102  191271 provision.go:143] copyHostCerts
	I0522 18:55:33.879142  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:55:33.879166  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:55:33.879178  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:55:33.879240  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:55:33.879346  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:55:33.879366  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:55:33.879370  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:55:33.879398  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:55:33.879456  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:55:33.879472  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:55:33.879476  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:55:33.879500  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:55:33.879560  191271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
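
provision.go:117 issues a server certificate whose SAN list covers every name the daemon may be dialed by: loopback, the container IP 192.168.67.2, and the host names. A self-contained crypto/x509 sketch of issuing such a cert; the in-memory CA below stands in for ca.pem/ca-key.pem, which the real flow loads from disk:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Throwaway CA (the real flow loads ca.pem / ca-key.pem instead).
	caKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server cert with the SAN list from the log line above.
	srvKey, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	check(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-737786"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-737786"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
}
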
	I0522 18:55:33.981006  191271 provision.go:177] copyRemoteCerts
	I0522 18:55:33.981066  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:55:33.981098  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.997545  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.083209  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:55:34.083291  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:55:34.103441  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:55:34.103506  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:55:34.123440  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:55:34.123484  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:55:34.142960  191271 provision.go:87] duration metric: took 280.016987ms to configureAuth
	I0522 18:55:34.142986  191271 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:55:34.143149  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:34.143191  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.159108  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.159288  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.159303  191271 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:55:34.271284  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:55:34.271307  191271 ubuntu.go:71] root file system type: overlay
	I0522 18:55:34.271413  191271 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:55:34.271478  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.287895  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.288060  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.288123  191271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:55:34.412978  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:55:34.413065  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.429426  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.429609  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.429634  191271 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:55:34.543660  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:55:34.543688  191271 machine.go:97] duration metric: took 4.074267152s to provisionDockerMachine
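
Worth noting in the command at 18:55:34.429: the rendered unit is written to docker.service.new, diffed against the live file, and daemon-reload/enable/restart only run when the two differ, so an unchanged unit costs no docker restart. The file-side half of that pattern, sketched locally (the systemctl half, run over SSH in the log, is left out):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateIfChanged writes rendered to path only when the contents differ,
// reporting whether a reload/restart would be needed.
func updateIfChanged(path string, rendered []byte) (bool, error) {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return false, nil // identical: skip daemon-reload and restart
	}
	tmp := path + ".new"
	if err := os.WriteFile(tmp, rendered, 0644); err != nil {
		return false, err
	}
	return true, os.Rename(tmp, path) // atomic swap, like the sudo mv above
}

func main() {
	changed, err := updateIfChanged("docker.service", []byte("[Unit]\n"))
	fmt.Println("changed:", changed, "err:", err)
}
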
	I0522 18:55:34.543701  191271 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:55:34.543714  191271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:55:34.543786  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:55:34.543829  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.560130  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.642945  191271 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:55:34.645547  191271 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:55:34.645562  191271 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:55:34.645568  191271 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:55:34.645579  191271 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:55:34.645586  191271 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:55:34.645590  191271 command_runner.go:130] > ID=ubuntu
	I0522 18:55:34.645594  191271 command_runner.go:130] > ID_LIKE=debian
	I0522 18:55:34.645599  191271 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:55:34.645603  191271 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:55:34.645609  191271 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:55:34.645615  191271 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:55:34.645619  191271 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:55:34.645674  191271 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:55:34.645696  191271 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:55:34.645706  191271 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:55:34.645714  191271 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:55:34.645725  191271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:55:34.645767  191271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:55:34.645841  191271 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:55:34.645853  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:55:34.645929  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:55:34.653086  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:55:34.672745  191271 start.go:296] duration metric: took 129.030542ms for postStartSetup
	I0522 18:55:34.672809  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:55:34.672852  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.688507  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.767346  191271 command_runner.go:130] > 27%
	I0522 18:55:34.767631  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:55:34.771441  191271 command_runner.go:130] > 213G
	I0522 18:55:34.771575  191271 fix.go:56] duration metric: took 4.601701145s for fixHost
	I0522 18:55:34.771595  191271 start.go:83] releasing machines lock for "multinode-737786", held for 4.601740929s
	I0522 18:55:34.771653  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:34.787192  191271 ssh_runner.go:195] Run: cat /version.json
	I0522 18:55:34.787232  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.787317  191271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:55:34.787371  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.803468  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.803975  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.962314  191271 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:55:34.964188  191271 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:55:34.964307  191271 ssh_runner.go:195] Run: systemctl --version
	I0522 18:55:34.968188  191271 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:55:34.968212  191271 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:55:34.968386  191271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:55:34.972176  191271 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:55:34.972197  191271 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:55:34.972207  191271 command_runner.go:130] > Device: 37h/55d	Inode: 1306969     Links: 1
	I0522 18:55:34.972215  191271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:55:34.972234  191271 command_runner.go:130] > Access: 2024-05-22 18:32:26.662663204 +0000
	I0522 18:55:34.972243  191271 command_runner.go:130] > Modify: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972251  191271 command_runner.go:130] > Change: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972259  191271 command_runner.go:130] >  Birth: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972314  191271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:55:34.987621  191271 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:55:34.987680  191271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:55:34.994995  191271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
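
The find/sed pair above normalizes any loopback CNI config in place (adds a "name" key if missing, pins cniVersion to 1.0.0) and would rename bridge/podman configs to *.mk_disabled; this run found none to disable. A loose Go rendering of the loopback patch, editing the JSON as plain text exactly as the sed command does:

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"regexp"
	"strings"
)

func main() {
	files, err := filepath.Glob("/etc/cni/net.d/*loopback.conf*")
	if err != nil {
		panic(err)
	}
	verRe := regexp.MustCompile(`"cniVersion": ".*"`)
	for _, f := range files {
		b, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(b), "loopback") {
			continue
		}
		s := string(b)
		if !strings.Contains(s, `"name"`) {
			// insert a name above the "type": "loopback" line, as sed -i does
			s = strings.Replace(s, `"type": "loopback"`,
				`"name": "loopback",`+"\n    "+`"type": "loopback"`, 1)
		}
		s = verRe.ReplaceAllString(s, `"cniVersion": "1.0.0"`)
		if err := os.WriteFile(f, []byte(s), 0644); err != nil {
			panic(err)
		}
		fmt.Println("patched:", f)
	}
}
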
	I0522 18:55:34.995017  191271 start.go:494] detecting cgroup driver to use...
	I0522 18:55:34.995044  191271 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:55:34.995149  191271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:55:35.008015  191271 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:55:35.008981  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:55:35.017393  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:55:35.027698  191271 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:55:35.027743  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:55:35.036084  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:55:35.044052  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:55:35.052258  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:55:35.060384  191271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:55:35.067811  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:55:35.075774  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:55:35.083880  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:55:35.091876  191271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:55:35.098619  191271 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:55:35.098662  191271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:55:35.105547  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:35.177710  191271 ssh_runner.go:195] Run: sudo systemctl restart containerd
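
Even with docker as the selected runtime, containerd's config is normalized first because the docker engine is bound to it (see the "skipping containerd shutdown" line below), and /etc/crictl.yaml is pointed at containerd here before being re-pointed at cri-dockerd a few lines later. A sketch of that one-key config write (writeCrictlConfig is an illustrative name):

package main

import (
	"fmt"
	"os"
)

// writeCrictlConfig tells crictl which CRI endpoint to talk to, matching the
// single-line /etc/crictl.yaml written twice in the log above.
func writeCrictlConfig(endpoint string) error {
	data := fmt.Sprintf("runtime-endpoint: %s\n", endpoint)
	return os.WriteFile("/etc/crictl.yaml", []byte(data), 0644)
}

func main() {
	// The docker runtime path ends up on cri-dockerd:
	if err := writeCrictlConfig("unix:///var/run/cri-dockerd.sock"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
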
	I0522 18:55:35.250942  191271 start.go:494] detecting cgroup driver to use...
	I0522 18:55:35.251038  191271 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:55:35.251122  191271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:55:35.261334  191271 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:55:35.261354  191271 command_runner.go:130] > [Unit]
	I0522 18:55:35.261362  191271 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:55:35.261370  191271 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:55:35.261375  191271 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:55:35.261384  191271 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:55:35.261391  191271 command_runner.go:130] > Wants=network-online.target
	I0522 18:55:35.261415  191271 command_runner.go:130] > Requires=docker.socket
	I0522 18:55:35.261432  191271 command_runner.go:130] > StartLimitBurst=3
	I0522 18:55:35.261443  191271 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:55:35.261451  191271 command_runner.go:130] > [Service]
	I0522 18:55:35.261457  191271 command_runner.go:130] > Type=notify
	I0522 18:55:35.261468  191271 command_runner.go:130] > Restart=on-failure
	I0522 18:55:35.261483  191271 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:55:35.261500  191271 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:55:35.261516  191271 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:55:35.261524  191271 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:55:35.261534  191271 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:55:35.261547  191271 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:55:35.261557  191271 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:55:35.261576  191271 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:55:35.261588  191271 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:55:35.261594  191271 command_runner.go:130] > ExecStart=
	I0522 18:55:35.261621  191271 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:55:35.261631  191271 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:55:35.261646  191271 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:55:35.261659  191271 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:55:35.261669  191271 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:55:35.261675  191271 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:55:35.261684  191271 command_runner.go:130] > LimitCORE=infinity
	I0522 18:55:35.261693  191271 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:55:35.261703  191271 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:55:35.261710  191271 command_runner.go:130] > TasksMax=infinity
	I0522 18:55:35.261720  191271 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:55:35.261728  191271 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:55:35.261736  191271 command_runner.go:130] > Delegate=yes
	I0522 18:55:35.261744  191271 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:55:35.261754  191271 command_runner.go:130] > KillMode=process
	I0522 18:55:35.261765  191271 command_runner.go:130] > [Install]
	I0522 18:55:35.261772  191271 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:55:35.262253  191271 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:55:35.262328  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:55:35.272378  191271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:55:35.286942  191271 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:55:35.287999  191271 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:55:35.290999  191271 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:55:35.291145  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:55:35.298279  191271 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:55:35.315216  191271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:55:35.446839  191271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:55:35.548330  191271 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:55:35.548469  191271 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:55:35.564761  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:35.639152  191271 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:55:35.897209  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:55:35.908119  191271 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:55:35.918345  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:55:35.927683  191271 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:55:36.004999  191271 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:55:36.078400  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.150568  191271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:55:36.162061  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:55:36.171038  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.243030  191271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:55:36.303786  191271 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:55:36.303856  191271 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:55:36.307863  191271 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:55:36.307890  191271 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:55:36.307896  191271 command_runner.go:130] > Device: 41h/65d	Inode: 218         Links: 1
	I0522 18:55:36.307903  191271 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:55:36.307908  191271 command_runner.go:130] > Access: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307913  191271 command_runner.go:130] > Modify: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307919  191271 command_runner.go:130] > Change: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307922  191271 command_runner.go:130] >  Birth: -
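
start.go:541 allows up to 60s for /var/run/cri-dockerd.sock to appear; the stat above shows it was already there. A generic poll that implements the same wait, assuming readiness means the unix socket accepts a connection:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSocket dials the unix socket until it accepts or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if c, err := net.DialTimeout("unix", path, time.Second); err == nil {
			c.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}
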
	I0522 18:55:36.307945  191271 start.go:562] Will wait 60s for crictl version
	I0522 18:55:36.307977  191271 ssh_runner.go:195] Run: which crictl
	I0522 18:55:36.310791  191271 command_runner.go:130] > /usr/bin/crictl
	I0522 18:55:36.310921  191271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:55:36.342474  191271 command_runner.go:130] > Version:  0.1.0
	I0522 18:55:36.342498  191271 command_runner.go:130] > RuntimeName:  docker
	I0522 18:55:36.342505  191271 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:55:36.342511  191271 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:55:36.342526  191271 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:55:36.342561  191271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:55:36.363987  191271 command_runner.go:130] > 26.1.2
	I0522 18:55:36.365226  191271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:55:36.387207  191271 command_runner.go:130] > 26.1.2
	I0522 18:55:36.389505  191271 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:55:36.389579  191271 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:55:36.405602  191271 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:55:36.408842  191271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:55:36.418521  191271 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:55:36.418633  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:36.418681  191271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:55:36.434338  191271 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:55:36.434356  191271 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:55:36.434360  191271 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:55:36.434365  191271 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:55:36.434370  191271 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:55:36.434376  191271 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:55:36.434385  191271 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:55:36.434392  191271 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:55:36.434401  191271 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:36.434411  191271 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:55:36.435375  191271 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:55:36.435391  191271 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:55:36.435443  191271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:55:36.451482  191271 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:55:36.451502  191271 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:55:36.451508  191271 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:55:36.451513  191271 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:55:36.451518  191271 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:55:36.451523  191271 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:55:36.451536  191271 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:55:36.451540  191271 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:55:36.451545  191271 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:36.451553  191271 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:55:36.452593  191271 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:55:36.452609  191271 cache_images.go:84] Images are preloaded, skipping loading
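
The two `docker images` listings above are how the run concludes the preload tarball needs no extraction: every expected tag is already in the daemon. A rough sketch of that presence check (the expected list is trimmed to three of the ten images shown):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "images",
		"--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	missing := 0
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.30.1",
		"registry.k8s.io/etcd:3.5.12-0",
		"registry.k8s.io/pause:3.9",
	} {
		if !have[want] {
			missing++
			fmt.Println("missing:", want)
		}
	}
	fmt.Println("preload extraction needed:", missing > 0)
}
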
	I0522 18:55:36.452620  191271 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:55:36.452743  191271 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
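
The kubelet drop-in above is rendered from the node's settings: binary path by Kubernetes version, hostname override, and node IP. A text/template sketch of that rendering, with values taken from this run (the template wrapper is illustrative, not minikube's):

package main

import (
	"os"
	"text/template"
)

const unit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	data := struct{ Version, Node, IP string }{"v1.30.1", "multinode-737786", "192.168.67.2"}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
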
	I0522 18:55:36.452799  191271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:55:36.491841  191271 command_runner.go:130] > cgroupfs
	I0522 18:55:36.493137  191271 cni.go:84] Creating CNI manager for ""
	I0522 18:55:36.493150  191271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:55:36.493167  191271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:55:36.493191  191271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:55:36.493314  191271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:55:36.493364  191271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:55:36.500368  191271 command_runner.go:130] > kubeadm
	I0522 18:55:36.500385  191271 command_runner.go:130] > kubectl
	I0522 18:55:36.500390  191271 command_runner.go:130] > kubelet
	I0522 18:55:36.501014  191271 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:55:36.501074  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:55:36.508385  191271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:55:36.523332  191271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:55:36.537874  191271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0522 18:55:36.552595  191271 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:55:36.555448  191271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:55:36.564451  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.642902  191271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:55:36.654630  191271 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:55:36.654650  191271 certs.go:194] generating shared ca certs ...
	I0522 18:55:36.654663  191271 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:36.654795  191271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:55:36.654860  191271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:55:36.654873  191271 certs.go:256] generating profile certs ...
	I0522 18:55:36.654970  191271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:55:36.655041  191271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:55:36.655092  191271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:55:36.655106  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:55:36.655127  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:55:36.655145  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:55:36.655158  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:55:36.655171  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:55:36.655182  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:55:36.655196  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:55:36.655210  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:55:36.655259  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:55:36.655305  191271 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:55:36.655318  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:55:36.655347  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:55:36.655380  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:55:36.655406  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:55:36.655457  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:55:36.655490  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:55:36.655509  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:55:36.655527  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:36.656072  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:55:36.677388  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:55:36.698564  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:55:36.746137  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:55:36.774335  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:55:36.844940  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:55:36.867576  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:55:36.892332  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:55:36.915359  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:55:36.935989  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:55:36.956836  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:55:36.978204  191271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:55:36.992907  191271 ssh_runner.go:195] Run: openssl version
	I0522 18:55:36.997686  191271 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:55:36.997748  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:55:37.005519  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008401  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008425  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008462  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.014161  191271 command_runner.go:130] > 51391683
	I0522 18:55:37.014217  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:55:37.021650  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:55:37.029393  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032351  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032375  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032410  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.037998  191271 command_runner.go:130] > 3ec20f2e
	I0522 18:55:37.038254  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:55:37.045680  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:55:37.053800  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056711  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056742  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056791  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.062340  191271 command_runner.go:130] > b5213941
	I0522 18:55:37.062547  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
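The openssl/ln pairs above install each CA into the OpenSSL trust store: the x509 -hash output (51391683, 3ec20f2e, b5213941 in this run) names a <subject-hash>.0 symlink that OpenSSL consults during verification. A sketch of that convention, assuming the openssl binary is on PATH:

package cacerts

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// hashLink reproduces the openssl/ln pair run above: compute the subject
// hash of the certificate, then point <certsDir>/<hash>.0 at it so OpenSSL
// can find the CA at verification time.
func hashLink(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // ln -fs semantics: replace any stale link
	return os.Symlink(certPath, link)
}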
	I0522 18:55:37.069967  191271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:55:37.072857  191271 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:55:37.072876  191271 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0522 18:55:37.072882  191271 command_runner.go:130] > Device: 801h/2049d	Inode: 1307017     Links: 1
	I0522 18:55:37.072888  191271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:55:37.072894  191271 command_runner.go:130] > Access: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072899  191271 command_runner.go:130] > Modify: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072903  191271 command_runner.go:130] > Change: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072911  191271 command_runner.go:130] >  Birth: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072945  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:55:37.078522  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.078755  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:55:37.084341  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.084578  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:55:37.090035  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.090259  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:55:37.095704  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.095756  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:55:37.101044  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.101094  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0522 18:55:37.106347  191271 command_runner.go:130] > Certificate will not expire
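Each -checkend 86400 run above asks whether a certificate stays valid for another 24 hours; "Certificate will not expire" is the passing answer. The same check can be done natively with crypto/x509; a sketch (validFor24h is an illustrative name):

package certcheck

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"os"
	"time"
)

// validFor24h answers the question `openssl x509 -checkend 86400` answers in
// the log: will this certificate still be within its NotAfter bound 24 hours
// (86400 s) from now?
func validFor24h(path string) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(24 * time.Hour).Before(cert.NotAfter), nil
}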
	I0522 18:55:37.106403  191271 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:37.106497  191271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:55:37.124843  191271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:55:37.132393  191271 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0522 18:55:37.132411  191271 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0522 18:55:37.132419  191271 command_runner.go:130] > /var/lib/minikube/etcd:
	I0522 18:55:37.132424  191271 command_runner.go:130] > member
	W0522 18:55:37.132447  191271 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:55:37.132459  191271 kubeadm.go:407] found existing configuration files, will attempt cluster restart
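The sudo ls above is how the restart decision is made: surviving kubelet config, kubeadm flags, and an etcd member directory mean "will attempt cluster restart" rather than a fresh kubeadm init. A condensed sketch of that check (the real one runs over SSH, not locally):

package restart

import "os"

// canRestart condenses the decision logged above: if all three pieces of
// prior kubeadm state are still present in the container, restart the
// existing control plane instead of re-initializing it.
func canRestart() bool {
	for _, p := range []string{
		"/var/lib/kubelet/config.yaml",
		"/var/lib/kubelet/kubeadm-flags.env",
		"/var/lib/minikube/etcd/member",
	} {
		if _, err := os.Stat(p); err != nil {
			return false
		}
	}
	return true
}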
	I0522 18:55:37.132465  191271 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:55:37.132505  191271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:55:37.139565  191271 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:55:37.139949  191271 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-737786" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.140068  191271 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-737786" cluster setting kubeconfig missing "multinode-737786" context setting]
	I0522 18:55:37.140319  191271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
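The kubeconfig repair above adds the missing cluster and context entries for the profile before writing the file back under a lock. A sketch of the repair using client-go's clientcmd package (the function name and the minimal fields set here are illustrative):

package kubecfg

import (
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/clientcmd/api"
)

// repair fills in a missing cluster/context pair for the profile, the
// "needs updating (will repair)" step in the log, then writes the
// kubeconfig back to disk.
func repair(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server // e.g. https://192.168.67.2:8443
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}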
	I0522 18:55:37.140688  191271 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.140913  191271 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:55:37.141318  191271 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:55:37.141459  191271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:55:37.148863  191271 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.67.2
	I0522 18:55:37.148895  191271 kubeadm.go:591] duration metric: took 16.425758ms to restartPrimaryControlPlane
	I0522 18:55:37.148904  191271 kubeadm.go:393] duration metric: took 42.505287ms to StartCluster
	I0522 18:55:37.148931  191271 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:37.148985  191271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.149459  191271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:37.149654  191271 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:55:37.152713  191271 out.go:177] * Verifying Kubernetes components...
	I0522 18:55:37.149721  191271 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:55:37.149877  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:37.153954  191271 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:55:37.153961  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:37.153992  191271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:55:37.153957  191271 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:55:37.154051  191271 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	W0522 18:55:37.154065  191271 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:55:37.154096  191271 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:37.154247  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.154486  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.171776  191271 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.172020  191271 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProt
os:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:55:37.173669  191271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:37.172259  191271 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	W0522 18:55:37.173707  191271 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:55:37.173740  191271 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:37.174905  191271 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.174926  191271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:55:37.174967  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:37.174090  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.190845  191271 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.190870  191271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:55:37.190937  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:37.197226  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:37.210979  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
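Both sshutil clients above dial 127.0.0.1:32927, the host port Docker published for the container's sshd. The inspect template in the cli_runner lines is what resolves that port; a runnable sketch using the same template:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// hostSSHPort runs the same inspect template as the cli_runner lines above
// to find which localhost port Docker mapped to the container's port 22.
func hostSSHPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("multinode-737786")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Printf("ssh -p %s docker@127.0.0.1\n", port) // 32927 in this run
}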
	I0522 18:55:37.239056  191271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:55:37.249298  191271 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:55:37.249409  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:37.249419  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:37.249426  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:37.249430  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:37.249651  191271 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:55:37.249672  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:37.292541  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.309037  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.371074  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.371130  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.371176  191271 retry.go:31] will retry after 264.181237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.460775  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.460825  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.460847  191271 retry.go:31] will retry after 133.777268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.595213  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.635676  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.749887  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:37.749959  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:37.749982  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:37.749999  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:37.750293  191271 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:55:37.750342  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:37.844082  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.844160  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.844205  191271 retry.go:31] will retry after 478.031663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.853584  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.857211  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.857246  191271 retry.go:31] will retry after 515.22721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
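The addon applies above fail while the apiserver is still refusing connections on [::1]:8443, and retry.go schedules each re-attempt after a short randomized delay. A self-contained sketch of that pattern (apply is a stand-in for the real kubectl invocation over SSH):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryApply keeps re-running apply after short randomized delays, the shape
// of the "will retry after 264.181237ms" lines above, until it succeeds or
// the attempts run out.
func retryApply(apply func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		delay := time.Duration(100+rand.Intn(500)) * time.Millisecond
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryApply(func() error {
		calls++
		if calls < 3 { // first attempts fail, as in the log
			return errors.New("dial tcp [::1]:8443: connect: connection refused")
		}
		return nil
	}, 5)
	fmt.Println("final:", err)
}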
	I0522 18:55:38.249559  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:38.249587  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:38.249598  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:38.249602  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:38.323157  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:38.373635  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:39.946432  191271 round_trippers.go:574] Response Status: 200 OK in 1696 milliseconds
	I0522 18:55:39.946464  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:39.946474  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:39.946478  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:39.946482  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:39.946485  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:39 GMT
	I0522 18:55:39.946489  191271 round_trippers.go:580]     Audit-Id: 25c542b6-5d69-4e1f-b457-019f46d0b3c3
	I0522 18:55:39.946493  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:39.947402  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:39.948394  191271 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:55:39.948474  191271 node_ready.go:38] duration metric: took 2.699146059s for node "multinode-737786" to be "Ready" ...
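Once the apiserver finally answers with a 200, node_ready declares the node Ready by reading its conditions, which is what the GET /api/v1/nodes polling above amounts to. A client-go sketch under the assumption of an already-constructed clientset (the log builds its own REST client from the repaired kubeconfig instead):

package readiness

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// nodeReady fetches the node and reports whether it carries a Ready=True
// condition, the check behind node "multinode-737786" has status
// "Ready":"True" above.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}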
	I0522 18:55:39.948494  191271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:55:39.948584  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:39.948597  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:39.948606  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:39.948613  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:39.963427  191271 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0522 18:55:39.963451  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:39.963460  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:39.963465  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:39.963470  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:39 GMT
	I0522 18:55:39.963473  191271 round_trippers.go:580]     Audit-Id: 29cd26ef-7452-4010-9449-59e360709035
	I0522 18:55:39.963477  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:39.963481  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.048655  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1526"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 57633 chars]
	I0522 18:55:40.053660  191271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.053824  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:55:40.053850  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.053870  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.053883  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.059346  191271 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:55:40.059421  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.059441  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.059454  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.059469  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:40.059497  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:40.059516  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.059527  191271 round_trippers.go:580]     Audit-Id: 93431c2f-ec1f-4fd8-800a-aecc0626a610
	I0522 18:55:40.059734  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:55:40.060346  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.060403  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.060422  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.060435  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.146472  191271 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0522 18:55:40.146560  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.146586  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.146596  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.146600  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.146604  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.146608  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.146631  191271 round_trippers.go:580]     Audit-Id: c46997f5-8ce7-490a-b32e-d6ef84d46be8
	I0522 18:55:40.146761  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.147186  191271 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.147238  191271 pod_ready.go:81] duration metric: took 93.497189ms for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.147262  191271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.147415  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:55:40.147441  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.147460  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.147477  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.150508  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:40.150572  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.150588  191271 round_trippers.go:580]     Audit-Id: f365bd66-16d5-494e-bd03-4c158b4f19e1
	I0522 18:55:40.150601  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.150627  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.150631  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.150634  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.150638  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.150819  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:55:40.151364  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.151381  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.151391  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.151399  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.152784  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.152801  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.152811  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.152818  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.152831  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.152835  191271 round_trippers.go:580]     Audit-Id: d3e15c5e-fb13-4bdb-9f2b-e5251d5bd358
	I0522 18:55:40.152845  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.152850  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.152966  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.153383  191271 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.153406  191271 pod_ready.go:81] duration metric: took 6.080227ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.153421  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.153519  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:55:40.153530  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.153540  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.153545  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.155159  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.155179  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.155188  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.155205  191271 round_trippers.go:580]     Audit-Id: bf655a4b-43df-4c1b-8ffa-6e7ba1c46ee2
	I0522 18:55:40.155210  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.155215  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.155228  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.155231  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.155475  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:55:40.156172  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.156186  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.156195  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.156200  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.157607  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.157621  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.157629  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.157634  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.157638  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.157643  191271 round_trippers.go:580]     Audit-Id: a4d9b4c1-ce60-4797-85b7-8f19f338b51d
	I0522 18:55:40.157646  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.157650  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.158173  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.158539  191271 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.158551  191271 pod_ready.go:81] duration metric: took 5.118553ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.158561  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.158613  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:55:40.158618  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.158628  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.158634  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.162612  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:40.162628  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.162637  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.162641  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.162647  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.162652  191271 round_trippers.go:580]     Audit-Id: 4541150f-2cd8-4ffe-962b-1e97b5fbf351
	I0522 18:55:40.162666  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.162671  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.163141  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:55:40.163704  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.163735  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.163746  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.163769  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.167888  191271 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:55:40.167909  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.167918  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.167924  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.167928  191271 round_trippers.go:580]     Audit-Id: c457d921-e201-4732-9893-b1385b6f1926
	I0522 18:55:40.167950  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.167961  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.167965  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.168254  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.168604  191271 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.168619  191271 pod_ready.go:81] duration metric: took 10.05025ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.168630  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.168682  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:55:40.168687  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.168696  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.168746  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.171083  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.171096  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.171102  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.171106  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.171108  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.171111  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.171115  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.171118  191271 round_trippers.go:580]     Audit-Id: 60a65c08-d5cd-4e57-814c-1732c8213de5
	I0522 18:55:40.171338  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:55:40.171744  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.171761  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.171778  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.171785  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.172894  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.172909  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.172917  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.172923  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.172941  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.172949  191271 round_trippers.go:580]     Audit-Id: 4a8b876d-cd28-40b0-8c7f-d0d3dcdf9a8a
	I0522 18:55:40.172954  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.172960  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.173055  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.173292  191271 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.173304  191271 pod_ready.go:81] duration metric: took 4.667435ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
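The per-component waits above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy) each reduce to polling one kube-system pod until it reports a Ready=True condition. A sketch of such a wait loop, with an illustrative poll interval and an assumed clientset:

package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitPodReady polls the named pod in kube-system until its Ready condition
// is True or the context expires, the shape of each pod_ready wait above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, name string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}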
	I0522 18:55:40.173312  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.250373  191271 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0522 18:55:40.253545  191271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.930348316s)
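After the apply reports the StorageClass unchanged, the GET/PUT exchange that follows is enableOrDisableStorageClasses reconciling the is-default-class annotation on "standard". A client-go sketch of that reconciliation (ensureDefaultClass is an illustrative name; the log drives it through raw round-trippers):

package addons

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureDefaultClass fetches the StorageClass and writes it back with the
// default-class annotation set, matching the GET then PUT visible below.
func ensureDefaultClass(ctx context.Context, cs kubernetes.Interface, name string) error {
	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
	return err
}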
	I0522 18:55:40.253673  191271 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:55:40.253686  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.253693  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.253697  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.255762  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.255783  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.255791  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.255797  191271 round_trippers.go:580]     Content-Length: 1274
	I0522 18:55:40.255802  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.255806  191271 round_trippers.go:580]     Audit-Id: 29bedd89-fc34-4a6e-af90-ad42da35c8fd
	I0522 18:55:40.255818  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.255822  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.255827  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.255874  191271 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0522 18:55:40.256474  191271 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:55:40.256539  191271 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:55:40.256552  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.256570  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.256575  191271 round_trippers.go:473]     Content-Type: application/json
	I0522 18:55:40.256579  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.259420  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.259441  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.259451  191271 round_trippers.go:580]     Audit-Id: 82c4354f-9c05-40ba-a5a3-b7a52e45d257
	I0522 18:55:40.259456  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.259460  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.259463  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.259466  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.259468  191271 round_trippers.go:580]     Content-Length: 1220
	I0522 18:55:40.259471  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.259529  191271 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
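[Editor's note] Above, the addon manager re-applies the bundled storageclass.yaml and then round-trips the "standard" StorageClass (GET, then PUT with the kubectl last-applied-configuration annotation intact). A fragment sketching how that object can be read back with client-go (assumes a configured clientset like the one in the earlier sketch; names taken from the log):

    package sketch

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // defaultStorageClass reads the addon-managed StorageClass and reports
    // whether it is marked as the cluster default.
    func defaultStorageClass(cs *kubernetes.Clientset) error {
        sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        isDefault := sc.Annotations["storageclass.kubernetes.io/is-default-class"] == "true"
        fmt.Printf("provisioner=%s default=%v\n", sc.Provisioner, isDefault)
        return nil
    }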
	I0522 18:55:40.349664  191271 request.go:629] Waited for 176.30464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:55:40.349765  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:55:40.349773  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.349781  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.349789  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.351637  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.351668  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.351677  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.351681  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.351685  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.351689  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.351694  191271 round_trippers.go:580]     Audit-Id: e542db4a-5526-4b20-9370-c944caf3811a
	I0522 18:55:40.351699  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.351857  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:55:40.548957  191271 request.go:629] Waited for 196.618236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.549041  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.549047  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.549058  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.549067  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.551086  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.551107  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.551116  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.551122  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.551129  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.551133  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.551139  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.551143  191271 round_trippers.go:580]     Audit-Id: 1e1c2f19-5fcc-4420-a18e-31b43efa6830
	I0522 18:55:40.551354  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.551644  191271 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.551661  191271 pod_ready.go:81] duration metric: took 378.342194ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.551670  191271 pod_ready.go:38] duration metric: took 603.166138ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:55:40.551692  191271 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:55:40.551735  191271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:55:40.607738  191271 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0522 18:55:40.607762  191271 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0522 18:55:40.607769  191271 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:55:40.607776  191271 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:55:40.607780  191271 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0522 18:55:40.607785  191271 command_runner.go:130] > pod/storage-provisioner configured
	I0522 18:55:40.607801  191271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23414082s)
	I0522 18:55:40.607851  191271 command_runner.go:130] > 1914
	I0522 18:55:40.607887  191271 api_server.go:72] duration metric: took 3.45820997s to wait for apiserver process to appear ...
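[Editor's note] The apiserver process wait above boils down to running pgrep inside the node (over minikube's ssh_runner) until it prints a PID; "1914" is the apiserver's PID here. A local illustration of the same command (assumption: run directly on the node rather than over SSH):

    package sketch

    import (
        "os/exec"
        "strings"
    )

    // apiserverPID runs the pgrep from the log: -f matches against the full
    // command line, -x requires an exact match, -n picks the newest process.
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err // non-zero exit: no matching process yet
        }
        return strings.TrimSpace(string(out)), nil
    }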
	I0522 18:55:40.610759  191271 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:55:40.607897  191271 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:55:40.611942  191271 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:55:40.611952  191271 addons.go:505] duration metric: took 3.462227154s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0522 18:55:40.615330  191271 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0522 18:55:40.615348  191271 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
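[Editor's note] Each [+]/[-] line above is one apiserver health subcheck; /healthz returns 500 while any subcheck is still failing (here the rbac and scheduling bootstrap post-start hooks) and a bare "ok" once all pass, as in the 200 response below. A sketch of the probe (TLS verification is skipped purely for illustration; minikube authenticates with the cluster's client certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        // 200 with body "ok" once healthy; a 500 body lists the failing checks.
        fmt.Println(resp.Status)
        fmt.Println(string(body))
    }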
	I0522 18:55:41.112944  191271 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:55:41.117073  191271 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:55:41.117157  191271 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:55:41.117169  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.117179  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.117183  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.118187  191271 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:55:41.118209  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.118219  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.118225  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.118233  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.118238  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.118246  191271 round_trippers.go:580]     Content-Length: 263
	I0522 18:55:41.118249  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.118253  191271 round_trippers.go:580]     Audit-Id: 3e08a892-73f4-4e1c-b61f-1d2036a1b85f
	I0522 18:55:41.118286  191271 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:55:41.118396  191271 api_server.go:141] control plane version: v1.30.1
	I0522 18:55:41.118421  191271 api_server.go:131] duration metric: took 506.49152ms to wait for apiserver health ...
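[Editor's note] The /version payload above is the same document client-go's discovery client parses; the check here only needs GitVersion. A fragment (assumes a configured clientset):

    package sketch

    import "k8s.io/client-go/kubernetes"

    // controlPlaneVersion returns the apiserver's reported version,
    // "v1.30.1" for the cluster above.
    func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return info.GitVersion, nil
    }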
	I0522 18:55:41.118429  191271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:55:41.118489  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:41.118499  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.118508  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.118517  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.121750  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:41.121770  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.121780  191271 round_trippers.go:580]     Audit-Id: d8ba21dd-cd96-4231-9557-114d06d5b330
	I0522 18:55:41.121787  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.121802  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.121807  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.121824  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.121832  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.122518  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1537","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60702 chars]
	I0522 18:55:41.124999  191271 system_pods.go:59] 8 kube-system pods found
	I0522 18:55:41.125049  191271 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:55:41.125064  191271 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:55:41.125078  191271 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:55:41.125095  191271 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:55:41.125108  191271 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:55:41.125123  191271 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:55:41.125135  191271 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:55:41.125143  191271 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0522 18:55:41.125153  191271 system_pods.go:74] duration metric: took 6.71923ms to wait for pod list to return data ...
	I0522 18:55:41.125171  191271 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:55:41.125259  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:55:41.125270  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.125279  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.125284  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.127424  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:41.127447  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.127456  191271 round_trippers.go:580]     Audit-Id: a747672b-a20b-4af5-ade4-ea4b67829eed
	I0522 18:55:41.127461  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.127465  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.127482  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.127491  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.127494  191271 round_trippers.go:580]     Content-Length: 262
	I0522 18:55:41.127497  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.127525  191271 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:55:41.127713  191271 default_sa.go:45] found service account: "default"
	I0522 18:55:41.127733  191271 default_sa.go:55] duration metric: took 2.553683ms for default service account to be created ...
	I0522 18:55:41.127742  191271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:55:41.149070  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:41.149091  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.149101  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.149106  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.153123  191271 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:55:41.153141  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.153148  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.153151  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.153153  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.153156  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.153158  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.153161  191271 round_trippers.go:580]     Audit-Id: 2b84be8b-87e2-4071-b787-4728703fa23e
	I0522 18:55:41.154286  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1537","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60702 chars]
	I0522 18:55:41.157071  191271 system_pods.go:86] 8 kube-system pods found
	I0522 18:55:41.157102  191271 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:55:41.157114  191271 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:55:41.157125  191271 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:55:41.157143  191271 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:55:41.157161  191271 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:55:41.157175  191271 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:55:41.157188  191271 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:55:41.157216  191271 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0522 18:55:41.157231  191271 system_pods.go:126] duration metric: took 29.478851ms to wait for k8s-apps to be running ...
	I0522 18:55:41.157247  191271 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:55:41.157295  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:55:41.169086  191271 system_svc.go:56] duration metric: took 11.831211ms WaitForService to wait for kubelet
	I0522 18:55:41.169113  191271 kubeadm.go:576] duration metric: took 4.019434744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
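[Editor's note] The kubelet service check above is a plain exit-code test: `systemctl is-active --quiet` exits 0 only when the unit is active. minikube issues it through its ssh_runner; shown locally here for illustration (the extra "service" token matches the log's command verbatim):

    package sketch

    import "os/exec"

    // kubeletRunning mirrors the systemctl probe from the log; Run returns
    // nil exactly when the command exits 0, i.e. the kubelet unit is active.
    func kubeletRunning() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run() == nil
    }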
	I0522 18:55:41.169134  191271 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:55:41.349440  191271 request.go:629] Waited for 180.210127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:55:41.349507  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:55:41.349515  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.349525  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.349532  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.352161  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:41.352182  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.352190  191271 round_trippers.go:580]     Audit-Id: 7838a856-6baa-4e95-bfcc-54203cf8503d
	I0522 18:55:41.352195  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.352201  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.352206  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.352219  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.352223  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.352341  191271 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 5264 chars]
	I0522 18:55:41.352807  191271 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:55:41.352842  191271 node_conditions.go:123] node cpu capacity is 8
	I0522 18:55:41.352854  191271 node_conditions.go:105] duration metric: took 183.714016ms to run NodePressure ...
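[Editor's note] The NodePressure verification reads each node's capacity (ephemeral storage 304681132Ki and 8 CPUs above) from the NodeList and would surface pressure conditions. A fragment with client-go (assumes a configured clientset):

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure lists nodes, prints their capacity, and fails if any
    // node reports memory or disk pressure.
    func checkNodePressure(cs *kubernetes.Clientset) error {
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            return err
        }
        for _, n := range nodes.Items {
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", n.Name, c.Type)
                }
            }
        }
        return nil
    }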
	I0522 18:55:41.352869  191271 start.go:240] waiting for startup goroutines ...
	I0522 18:55:41.352879  191271 start.go:245] waiting for cluster config update ...
	I0522 18:55:41.352892  191271 start.go:254] writing updated cluster config ...
	I0522 18:55:41.354996  191271 out.go:177] 
	I0522 18:55:41.356517  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:41.356594  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:41.358237  191271 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:55:41.359528  191271 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:55:41.360862  191271 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:55:41.362023  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:41.362046  191271 cache.go:56] Caching tarball of preloaded images
	I0522 18:55:41.362122  191271 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:55:41.362131  191271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:55:41.362129  191271 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:55:41.362226  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:41.379872  191271 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:55:41.379903  191271 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:55:41.379925  191271 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:55:41.379963  191271 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:41.380037  191271 start.go:364] duration metric: took 46.895µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:55:41.380065  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:41.380079  191271 fix.go:54] fixHost starting: m02
	I0522 18:55:41.380381  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:41.396179  191271 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Stopped err=<nil>
	W0522 18:55:41.396216  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:41.398486  191271 out.go:177] * Restarting existing docker container for "multinode-737786-m02" ...
	I0522 18:55:41.399852  191271 cli_runner.go:164] Run: docker start multinode-737786-m02
	I0522 18:55:41.739232  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:41.759108  191271 kic.go:430] container "multinode-737786-m02" state is running.
	I0522 18:55:41.759532  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:41.781733  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:55:41.781802  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:41.800873  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32932 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	W0522 18:55:41.801752  191271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59104->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:41.801799  191271 retry.go:31] will retry after 178.387586ms: ssh: handshake failed: read tcp 127.0.0.1:59104->127.0.0.1:32932: read: connection reset by peer
	W0522 18:55:41.981559  191271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59106->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:41.981589  191271 retry.go:31] will retry after 356.566239ms: ssh: handshake failed: read tcp 127.0.0.1:59106->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:42.427258  191271 command_runner.go:130] > 27%
	I0522 18:55:42.427545  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:55:42.431134  191271 command_runner.go:130] > 213G
	I0522 18:55:42.431367  191271 fix.go:56] duration metric: took 1.051284151s for fixHost
	I0522 18:55:42.431388  191271 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1.051331877s
	W0522 18:55:42.431406  191271 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:55:42.431491  191271 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:55:42.431503  191271 start.go:728] Will try again in 5 seconds ...
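[Editor's note] One plausible reading of the "container addresses should have 2 values, got 1 values: []" failure: the inspect template above prints "IPv4,IPv6" for the named network, but while the freshly restarted container has no entry under that network key, {{with ...}} renders nothing, and splitting an empty string on "," yields a single empty element. A self-contained demonstration of that template behavior (this illustrates the mechanism; it is not minikube's code):

    package main

    import (
        "bytes"
        "fmt"
        "strings"
        "text/template"
    )

    func main() {
        const f = `{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`
        tmpl := template.Must(template.New("inspect").Parse(f))
        // Simulate a container whose Networks map lacks the expected key:
        // index returns nil, so the {{with}} body is skipped entirely.
        data := map[string]any{"NetworkSettings": map[string]any{"Networks": map[string]any{}}}
        var out bytes.Buffer
        if err := tmpl.Execute(&out, data); err != nil {
            panic(err)
        }
        addrs := strings.Split(out.String(), ",")
        fmt.Printf("%d values: %q\n", len(addrs), addrs) // 1 values: [""]
    }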
	I0522 18:55:47.432624  191271 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:47.432726  191271 start.go:364] duration metric: took 68.501µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:55:47.432755  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:47.432767  191271 fix.go:54] fixHost starting: m02
	I0522 18:55:47.433049  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:47.449030  191271 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Running err=<nil>
	W0522 18:55:47.449055  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:47.451390  191271 out.go:177] * Updating the running docker "multinode-737786-m02" container ...
	I0522 18:55:47.452545  191271 machine.go:94] provisionDockerMachine start ...
	I0522 18:55:47.452614  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.468745  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.468930  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.468943  191271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:55:47.578455  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:55:47.578487  191271 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:55:47.578548  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.595125  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.595343  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.595360  191271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:55:47.721219  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:55:47.721292  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.737411  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.737578  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.737594  191271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:55:47.850975  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
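[Editor's note] The provisioning steps above push small shell snippets (hostname, the /etc/hosts edit) over SSH to 127.0.0.1:32932 as user "docker" with the machine's id_rsa key, all values visible in the log. A minimal golang.org/x/crypto/ssh sketch of one such round trip (host-key checking is disabled here for illustration only):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test tunnel only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32932", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.Output("hostname") // the first command the provisioner runs
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s", out)
    }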
	I0522 18:55:47.851002  191271 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:55:47.851027  191271 ubuntu.go:177] setting up certificates
	I0522 18:55:47.851040  191271 provision.go:84] configureAuth start
	I0522 18:55:47.851098  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.865910  191271 provision.go:87] duration metric: took 14.860061ms to configureAuth
	W0522 18:55:47.865931  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.865957  191271 retry.go:31] will retry after 87.876µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.867083  191271 provision.go:84] configureAuth start
	I0522 18:55:47.867151  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.883908  191271 provision.go:87] duration metric: took 16.806772ms to configureAuth
	W0522 18:55:47.883927  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.883942  191271 retry.go:31] will retry after 102.785µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.885049  191271 provision.go:84] configureAuth start
	I0522 18:55:47.885127  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.899850  191271 provision.go:87] duration metric: took 14.775266ms to configureAuth
	W0522 18:55:47.899866  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.899883  191271 retry.go:31] will retry after 127.962µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.900992  191271 provision.go:84] configureAuth start
	I0522 18:55:47.901044  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.915918  191271 provision.go:87] duration metric: took 14.910204ms to configureAuth
	W0522 18:55:47.915936  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.915950  191271 retry.go:31] will retry after 176.177µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.917057  191271 provision.go:84] configureAuth start
	I0522 18:55:47.917110  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.933132  191271 provision.go:87] duration metric: took 16.057912ms to configureAuth
	W0522 18:55:47.933147  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.933162  191271 retry.go:31] will retry after 415.738µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.934277  191271 provision.go:84] configureAuth start
	I0522 18:55:47.934340  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.949561  191271 provision.go:87] duration metric: took 15.2663ms to configureAuth
	W0522 18:55:47.949578  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.949593  191271 retry.go:31] will retry after 695.271µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.950702  191271 provision.go:84] configureAuth start
	I0522 18:55:47.950753  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.965237  191271 provision.go:87] duration metric: took 14.518838ms to configureAuth
	W0522 18:55:47.965256  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.965273  191271 retry.go:31] will retry after 624.889µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.966387  191271 provision.go:84] configureAuth start
	I0522 18:55:47.966449  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.981238  191271 provision.go:87] duration metric: took 14.830065ms to configureAuth
	W0522 18:55:47.981257  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.981273  191271 retry.go:31] will retry after 1.057459ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.982393  191271 provision.go:84] configureAuth start
	I0522 18:55:47.982466  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.998674  191271 provision.go:87] duration metric: took 16.255395ms to configureAuth
	W0522 18:55:47.998692  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.998712  191271 retry.go:31] will retry after 2.801269ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.001909  191271 provision.go:84] configureAuth start
	I0522 18:55:48.001983  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.017417  191271 provision.go:87] duration metric: took 15.487122ms to configureAuth
	W0522 18:55:48.017438  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.017457  191271 retry.go:31] will retry after 2.6692ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.020641  191271 provision.go:84] configureAuth start
	I0522 18:55:48.020707  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.035890  191271 provision.go:87] duration metric: took 15.231178ms to configureAuth
	W0522 18:55:48.035907  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.035925  191271 retry.go:31] will retry after 4.913205ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.041121  191271 provision.go:84] configureAuth start
	I0522 18:55:48.041190  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.056341  191271 provision.go:87] duration metric: took 15.201859ms to configureAuth
	W0522 18:55:48.056358  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.056374  191271 retry.go:31] will retry after 8.73344ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.065553  191271 provision.go:84] configureAuth start
	I0522 18:55:48.065620  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.080469  191271 provision.go:87] duration metric: took 14.898331ms to configureAuth
	W0522 18:55:48.080489  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.080506  191271 retry.go:31] will retry after 13.355259ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.094679  191271 provision.go:84] configureAuth start
	I0522 18:55:48.094748  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.109923  191271 provision.go:87] duration metric: took 15.225024ms to configureAuth
	W0522 18:55:48.109942  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.109959  191271 retry.go:31] will retry after 17.591086ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.128159  191271 provision.go:84] configureAuth start
	I0522 18:55:48.128244  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.143258  191271 provision.go:87] duration metric: took 15.081459ms to configureAuth
	W0522 18:55:48.143309  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.143328  191271 retry.go:31] will retry after 30.694182ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.174523  191271 provision.go:84] configureAuth start
	I0522 18:55:48.174643  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.190339  191271 provision.go:87] duration metric: took 15.791254ms to configureAuth
	W0522 18:55:48.190355  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.190371  191271 retry.go:31] will retry after 60.478865ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.251580  191271 provision.go:84] configureAuth start
	I0522 18:55:48.251680  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.267446  191271 provision.go:87] duration metric: took 15.839853ms to configureAuth
	W0522 18:55:48.267466  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.267484  191271 retry.go:31] will retry after 63.884927ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.331706  191271 provision.go:84] configureAuth start
	I0522 18:55:48.331794  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.347085  191271 provision.go:87] duration metric: took 15.328539ms to configureAuth
	W0522 18:55:48.347105  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.347122  191271 retry.go:31] will retry after 87.655661ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.435332  191271 provision.go:84] configureAuth start
	I0522 18:55:48.435425  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.451751  191271 provision.go:87] duration metric: took 16.388799ms to configureAuth
	W0522 18:55:48.451774  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.451793  191271 retry.go:31] will retry after 195.353755ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.648137  191271 provision.go:84] configureAuth start
	I0522 18:55:48.648216  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.663505  191271 provision.go:87] duration metric: took 15.339444ms to configureAuth
	W0522 18:55:48.663523  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.663539  191271 retry.go:31] will retry after 289.097561ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.952931  191271 provision.go:84] configureAuth start
	I0522 18:55:48.953045  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.968997  191271 provision.go:87] duration metric: took 16.035059ms to configureAuth
	W0522 18:55:48.969019  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.969037  191271 retry.go:31] will retry after 186.761832ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.156383  191271 provision.go:84] configureAuth start
	I0522 18:55:49.156459  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:49.173159  191271 provision.go:87] duration metric: took 16.748544ms to configureAuth
	W0522 18:55:49.173181  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.173199  191271 retry.go:31] will retry after 327.938905ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.501699  191271 provision.go:84] configureAuth start
	I0522 18:55:49.501785  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:49.517950  191271 provision.go:87] duration metric: took 16.220449ms to configureAuth
	W0522 18:55:49.517970  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.517987  191271 retry.go:31] will retry after 817.802375ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:50.336261  191271 provision.go:84] configureAuth start
	I0522 18:55:50.336358  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:50.352199  191271 provision.go:87] duration metric: took 15.908402ms to configureAuth
	W0522 18:55:50.352217  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:50.352235  191271 retry.go:31] will retry after 975.249665ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:51.327901  191271 provision.go:84] configureAuth start
	I0522 18:55:51.327997  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:51.343571  191271 provision.go:87] duration metric: took 15.641557ms to configureAuth
	W0522 18:55:51.343589  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:51.343604  191271 retry.go:31] will retry after 1.511582383s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:52.855327  191271 provision.go:84] configureAuth start
	I0522 18:55:52.855421  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:52.874130  191271 provision.go:87] duration metric: took 18.776068ms to configureAuth
	W0522 18:55:52.874152  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:52.874173  191271 retry.go:31] will retry after 2.587827778s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:55.462838  191271 provision.go:84] configureAuth start
	I0522 18:55:55.462920  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:55.479954  191271 provision.go:87] duration metric: took 17.080473ms to configureAuth
	W0522 18:55:55.479973  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:55.479992  191271 retry.go:31] will retry after 4.788436213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:00.268555  191271 provision.go:84] configureAuth start
	I0522 18:56:00.268664  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:00.284768  191271 provision.go:87] duration metric: took 16.187921ms to configureAuth
	W0522 18:56:00.284787  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:00.284804  191271 retry.go:31] will retry after 4.16940433s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:04.458082  191271 provision.go:84] configureAuth start
	I0522 18:56:04.458158  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:04.474138  191271 provision.go:87] duration metric: took 16.031529ms to configureAuth
	W0522 18:56:04.474155  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:04.474171  191271 retry.go:31] will retry after 11.936949428s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:16.411971  191271 provision.go:84] configureAuth start
	I0522 18:56:16.412062  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:16.427556  191271 provision.go:87] duration metric: took 15.558638ms to configureAuth
	W0522 18:56:16.427574  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:16.427592  191271 retry.go:31] will retry after 9.484561192s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:25.912297  191271 provision.go:84] configureAuth start
	I0522 18:56:25.912384  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:25.927852  191271 provision.go:87] duration metric: took 15.527116ms to configureAuth
	W0522 18:56:25.927874  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:25.927894  191271 retry.go:31] will retry after 27.958237861s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:53.888233  191271 provision.go:84] configureAuth start
	I0522 18:56:53.888316  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:53.906509  191271 provision.go:87] duration metric: took 18.250582ms to configureAuth
	W0522 18:56:53.906529  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:53.906545  191271 retry.go:31] will retry after 38.774225348s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.682746  191271 provision.go:84] configureAuth start
	I0522 18:57:32.682888  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:57:32.700100  191271 provision.go:87] duration metric: took 17.312123ms to configureAuth
	W0522 18:57:32.700120  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.700141  191271 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.700149  191271 machine.go:97] duration metric: took 1m45.247591588s to provisionDockerMachine
	I0522 18:57:32.700204  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:57:32.700240  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:57:32.716059  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32932 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:57:32.795615  191271 command_runner.go:130] > 27%
	I0522 18:57:32.795930  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:57:32.799643  191271 command_runner.go:130] > 213G
	I0522 18:57:32.799841  191271 fix.go:56] duration metric: took 1m45.367071968s for fixHost
	I0522 18:57:32.799861  191271 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m45.367119086s
	W0522 18:57:32.799939  191271 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.801845  191271 out.go:177] 
	W0522 18:57:32.802985  191271 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:57:32.802997  191271 out.go:239] * 
	W0522 18:57:32.803803  191271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:57:32.805200  191271 out.go:177] 

                                                
                                                
** /stderr **
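
Note on the failure above: the cli_runner lines show the provisioner evaluating the Go template {{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}, which renders "<ipv4>,<ipv6>" only if the container has an entry for a network literally named "multinode-737786-m02". If the container is attached only to the "multinode-737786" network (as the docker inspect output below shows for the primary container), the {{with}} guard renders an empty string, and splitting that on "," yields a single empty value -- hence "should have 2 values, got 1 values: []". A minimal Go sketch of that failure mode, as an illustration only (this is not minikube's actual provisioner code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIPs mimics the inspect-template check visible in the log:
	// it expects the template to render "<ipv4>,<ipv6>" for the named network.
	func containerIPs(container, network string) (string, string, error) {
		format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", "", err
		}
		// If the container is not attached to `network`, {{with}} renders nothing,
		// so the split produces one (empty) value instead of two.
		vals := strings.Split(strings.TrimSpace(string(out)), ",")
		if len(vals) != 2 {
			return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(vals), vals)
		}
		return vals[0], vals[1], nil
	}

	func main() {
		if _, _, err := containerIPs("multinode-737786-m02", "multinode-737786-m02"); err != nil {
			fmt.Println("error getting ip during provisioning:", err)
		}
	}
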
multinode_test.go:328: failed to run minikube start. args "out/minikube-linux-amd64 node list -p multinode-737786" : exit status 80
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-737786
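
The "will retry after ..." cadence in the stderr above (13ms, 17ms, 30ms, ... 27.9s, 38.7s) has the shape of a jittered exponential backoff: each failed configureAuth roughly doubles the nominal wait, with randomization, until the provisioner gives up (here after about 1m45s). A sketch of that pattern under those assumptions (illustrative only; minikube's actual retry.go may differ):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn with a jittered, roughly doubling delay,
	// mirroring the "will retry after ..." progression in the log above.
	func retryWithBackoff(deadline time.Duration, fn func() error) error {
		delay := 13 * time.Millisecond
		start := time.Now()
		var err error
		for time.Since(start) < deadline {
			if err = fn(); err == nil {
				return nil
			}
			// Randomize around the nominal delay, then double it for next time.
			jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", jittered, err)
			time.Sleep(jittered)
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryWithBackoff(2*time.Second, func() error {
			return fmt.Errorf("error getting ip during provisioning")
		})
		fmt.Println("gave up:", err)
	}
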
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 191553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:55:30.428700973Z",
	            "FinishedAt": "2024-05-22T18:55:29.739597027Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e5d9c4f018f85e131e1e3e35160c3be5874cc3e9e983a114ff800193704e1cf",
	            "SandboxKey": "/var/run/docker/netns/5e5d9c4f018f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32927"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32926"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32923"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32924"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "db5b9713a729684619c46904638292c75dda74a2b3239964bd21c539163cbff6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
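
The Networks block above shows the primary container attached only to the "multinode-737786" network, with IPv4 192.168.67.2 and an empty GlobalIPv6Address. If the -m02 container is likewise attached only to "multinode-737786", then the template seen in the stderr, which indexes a network named "multinode-737786-m02", finds no entry and prints nothing. Run by hand against the primary container (same docker CLI invocation as in the log, shell-quoted), the template would print the IPv4 followed by a trailing comma:

	docker container inspect -f '{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' multinode-737786
	# prints: 192.168.67.2,
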
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m02_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m03 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp testdata/cp-test.txt                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m03_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02:/home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m02 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-737786 node stop m03                                                          | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	| node    | multinode-737786 node start                                                             | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-737786                                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC |                     |
	| stop    | -p multinode-737786                                                                     | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC | 22 May 24 18:55 UTC |
	| start   | -p multinode-737786                                                                     | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-737786                                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:57 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:55:30.016582  191271 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:55:30.016705  191271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:30.016722  191271 out.go:304] Setting ErrFile to fd 2...
	I0522 18:55:30.016730  191271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:30.016907  191271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:55:30.017442  191271 out.go:298] Setting JSON to false
	I0522 18:55:30.018352  191271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5874,"bootTime":1716398256,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:55:30.018407  191271 start.go:139] virtualization: kvm guest
	I0522 18:55:30.020609  191271 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:55:30.022032  191271 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:55:30.023205  191271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:55:30.022039  191271 notify.go:220] Checking for updates...
	I0522 18:55:30.024646  191271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:30.025941  191271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:55:30.027248  191271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:55:30.028476  191271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:55:30.030067  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:30.030140  191271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:55:30.051240  191271 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:55:30.051370  191271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:55:30.102381  191271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:55:30.093628495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:55:30.102490  191271 docker.go:295] overlay module found
	I0522 18:55:30.104504  191271 out.go:177] * Using the docker driver based on existing profile
	I0522 18:55:30.105610  191271 start.go:297] selected driver: docker
	I0522 18:55:30.105625  191271 start.go:901] validating driver "docker" against &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:30.105706  191271 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:55:30.105775  191271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:55:30.148150  191271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:55:30.139765007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:55:30.149022  191271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:55:30.149059  191271 cni.go:84] Creating CNI manager for ""
	I0522 18:55:30.149071  191271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:55:30.149133  191271 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:30.151019  191271 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:55:30.152138  191271 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:55:30.153345  191271 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:55:30.154404  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:30.154431  191271 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:55:30.154440  191271 cache.go:56] Caching tarball of preloaded images
	I0522 18:55:30.154497  191271 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:55:30.154509  191271 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:55:30.154516  191271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:55:30.154599  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:30.169685  191271 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:55:30.169705  191271 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:55:30.169727  191271 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:55:30.169758  191271 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:30.169840  191271 start.go:364] duration metric: took 44.168µs to acquireMachinesLock for "multinode-737786"
	I0522 18:55:30.169862  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:30.169876  191271 fix.go:54] fixHost starting: 
	I0522 18:55:30.170113  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:30.186497  191271 fix.go:112] recreateIfNeeded on multinode-737786: state=Stopped err=<nil>
	W0522 18:55:30.186530  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:30.188329  191271 out.go:177] * Restarting existing docker container for "multinode-737786" ...
	I0522 18:55:30.189575  191271 cli_runner.go:164] Run: docker start multinode-737786
	I0522 18:55:30.434280  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:30.450599  191271 kic.go:430] container "multinode-737786" state is running.
	I0522 18:55:30.450960  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:30.469222  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:30.469408  191271 machine.go:94] provisionDockerMachine start ...
	I0522 18:55:30.469451  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:30.486145  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:30.486342  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:30.486358  191271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:55:30.486939  191271 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33870->127.0.0.1:32927: read: connection reset by peer
	I0522 18:55:33.598615  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:55:33.598642  191271 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:55:33.598705  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.616028  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:33.616267  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:33.616289  191271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:55:33.737498  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:55:33.737589  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.753768  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:33.753939  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:33.753956  191271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:55:33.862867  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:55:33.862895  191271 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:55:33.862922  191271 ubuntu.go:177] setting up certificates
	I0522 18:55:33.862933  191271 provision.go:84] configureAuth start
	I0522 18:55:33.862986  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:33.879102  191271 provision.go:143] copyHostCerts
	I0522 18:55:33.879142  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:55:33.879166  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:55:33.879178  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:55:33.879240  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:55:33.879346  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:55:33.879366  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:55:33.879370  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:55:33.879398  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:55:33.879456  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:55:33.879472  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:55:33.879476  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:55:33.879500  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:55:33.879560  191271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
	I0522 18:55:33.981006  191271 provision.go:177] copyRemoteCerts
	I0522 18:55:33.981066  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:55:33.981098  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.997545  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.083209  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:55:34.083291  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:55:34.103441  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:55:34.103506  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:55:34.123440  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:55:34.123484  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:55:34.142960  191271 provision.go:87] duration metric: took 280.016987ms to configureAuth
	I0522 18:55:34.142986  191271 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:55:34.143149  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:34.143191  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.159108  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.159288  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.159303  191271 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:55:34.271284  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:55:34.271307  191271 ubuntu.go:71] root file system type: overlay
	I0522 18:55:34.271413  191271 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:55:34.271478  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.287895  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.288060  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.288123  191271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:55:34.412978  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:55:34.413065  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.429426  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.429609  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.429634  191271 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:55:34.543660  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
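The exchange above is the provisioner's install-if-changed pattern for the Docker unit: the rendered file is parked at docker.service.new and only swapped in (followed by a daemon-reload, enable, and restart) when it differs from what is installed. A standalone sketch of the same shell logic, using the paths from the log:

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# replace and restart only when the rendered unit actually changed
	if ! sudo diff -u "$cur" "$new"; then
	    sudo mv "$new" "$cur"
	    sudo systemctl -f daemon-reload
	    sudo systemctl -f enable docker
	    sudo systemctl -f restart docker
	fi

The empty command output above (no diff text) suggests the unit was unchanged on this pass, so Docker was not restarted here.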
	I0522 18:55:34.543688  191271 machine.go:97] duration metric: took 4.074267152s to provisionDockerMachine
	I0522 18:55:34.543701  191271 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:55:34.543714  191271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:55:34.543786  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:55:34.543829  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.560130  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.642945  191271 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:55:34.645547  191271 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:55:34.645562  191271 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:55:34.645568  191271 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:55:34.645579  191271 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:55:34.645586  191271 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:55:34.645590  191271 command_runner.go:130] > ID=ubuntu
	I0522 18:55:34.645594  191271 command_runner.go:130] > ID_LIKE=debian
	I0522 18:55:34.645599  191271 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:55:34.645603  191271 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:55:34.645609  191271 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:55:34.645615  191271 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:55:34.645619  191271 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:55:34.645674  191271 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:55:34.645696  191271 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:55:34.645706  191271 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:55:34.645714  191271 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:55:34.645725  191271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:55:34.645767  191271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:55:34.645841  191271 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:55:34.645853  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:55:34.645929  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:55:34.653086  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:55:34.672745  191271 start.go:296] duration metric: took 129.030542ms for postStartSetup
	I0522 18:55:34.672809  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:55:34.672852  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.688507  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.767346  191271 command_runner.go:130] > 27%!(MISSING)
	I0522 18:55:34.767631  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:55:34.771441  191271 command_runner.go:130] > 213G
	I0522 18:55:34.771575  191271 fix.go:56] duration metric: took 4.601701145s for fixHost
	I0522 18:55:34.771595  191271 start.go:83] releasing machines lock for "multinode-737786", held for 4.601740929s
	I0522 18:55:34.771653  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:34.787192  191271 ssh_runner.go:195] Run: cat /version.json
	I0522 18:55:34.787232  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.787317  191271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:55:34.787371  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.803468  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.803975  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.962314  191271 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:55:34.964188  191271 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:55:34.964307  191271 ssh_runner.go:195] Run: systemctl --version
	I0522 18:55:34.968188  191271 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:55:34.968212  191271 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:55:34.968386  191271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:55:34.972176  191271 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:55:34.972197  191271 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:55:34.972207  191271 command_runner.go:130] > Device: 37h/55d	Inode: 1306969     Links: 1
	I0522 18:55:34.972215  191271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:55:34.972234  191271 command_runner.go:130] > Access: 2024-05-22 18:32:26.662663204 +0000
	I0522 18:55:34.972243  191271 command_runner.go:130] > Modify: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972251  191271 command_runner.go:130] > Change: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972259  191271 command_runner.go:130] >  Birth: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972314  191271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:55:34.987621  191271 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:55:34.987680  191271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:55:34.994995  191271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
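The find/sed pair above patches any loopback CNI config under /etc/cni/net.d so it carries an explicit "name" field and a pinned "cniVersion" of 1.0.0. A file patched this way would look roughly like the following (illustrative reconstruction; the node's actual file is the 78-byte /etc/cni/net.d/200-loopback.conf from the stat above):

	{
	    "cniVersion": "1.0.0",
	    "name": "loopback",
	    "type": "loopback"
	}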
	I0522 18:55:34.995017  191271 start.go:494] detecting cgroup driver to use...
	I0522 18:55:34.995044  191271 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:55:34.995149  191271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:55:35.008015  191271 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:55:35.008981  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:55:35.017393  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:55:35.027698  191271 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:55:35.027743  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:55:35.036084  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:55:35.044052  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:55:35.052258  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:55:35.060384  191271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:55:35.067811  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:55:35.075774  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:55:35.083880  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:55:35.091876  191271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:55:35.098619  191271 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:55:35.098662  191271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:55:35.105547  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:35.177710  191271 ssh_runner.go:195] Run: sudo systemctl restart containerd
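The sed series above rewrites /etc/containerd/config.toml to match the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 runtimes are mapped to io.containerd.runc.v2, and the CRI plugin gets the pause image, conf_dir, and unprivileged-port settings. After patching, the relevant keys would read roughly as follows (illustrative fragment, not a dump from this run):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false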
	I0522 18:55:35.250942  191271 start.go:494] detecting cgroup driver to use...
	I0522 18:55:35.251038  191271 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:55:35.251122  191271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:55:35.261334  191271 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:55:35.261354  191271 command_runner.go:130] > [Unit]
	I0522 18:55:35.261362  191271 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:55:35.261370  191271 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:55:35.261375  191271 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:55:35.261384  191271 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:55:35.261391  191271 command_runner.go:130] > Wants=network-online.target
	I0522 18:55:35.261415  191271 command_runner.go:130] > Requires=docker.socket
	I0522 18:55:35.261432  191271 command_runner.go:130] > StartLimitBurst=3
	I0522 18:55:35.261443  191271 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:55:35.261451  191271 command_runner.go:130] > [Service]
	I0522 18:55:35.261457  191271 command_runner.go:130] > Type=notify
	I0522 18:55:35.261468  191271 command_runner.go:130] > Restart=on-failure
	I0522 18:55:35.261483  191271 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:55:35.261500  191271 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:55:35.261516  191271 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:55:35.261524  191271 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:55:35.261534  191271 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:55:35.261547  191271 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:55:35.261557  191271 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:55:35.261576  191271 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:55:35.261588  191271 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:55:35.261594  191271 command_runner.go:130] > ExecStart=
	I0522 18:55:35.261621  191271 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:55:35.261631  191271 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:55:35.261646  191271 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:55:35.261659  191271 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:55:35.261669  191271 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:55:35.261675  191271 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:55:35.261684  191271 command_runner.go:130] > LimitCORE=infinity
	I0522 18:55:35.261693  191271 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:55:35.261703  191271 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:55:35.261710  191271 command_runner.go:130] > TasksMax=infinity
	I0522 18:55:35.261720  191271 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:55:35.261728  191271 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:55:35.261736  191271 command_runner.go:130] > Delegate=yes
	I0522 18:55:35.261744  191271 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:55:35.261754  191271 command_runner.go:130] > KillMode=process
	I0522 18:55:35.261765  191271 command_runner.go:130] > [Install]
	I0522 18:55:35.261772  191271 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:55:35.262253  191271 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:55:35.262328  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:55:35.272378  191271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:55:35.286942  191271 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:55:35.287999  191271 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:55:35.290999  191271 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:55:35.291145  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:55:35.298279  191271 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:55:35.315216  191271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:55:35.446839  191271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:55:35.548330  191271 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:55:35.548469  191271 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:55:35.564761  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:35.639152  191271 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:55:35.897209  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:55:35.908119  191271 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:55:35.918345  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:55:35.927683  191271 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:55:36.004999  191271 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:55:36.078400  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.150568  191271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:55:36.162061  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:55:36.171038  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.243030  191271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:55:36.303786  191271 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:55:36.303856  191271 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:55:36.307863  191271 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:55:36.307890  191271 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:55:36.307896  191271 command_runner.go:130] > Device: 41h/65d	Inode: 218         Links: 1
	I0522 18:55:36.307903  191271 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:55:36.307908  191271 command_runner.go:130] > Access: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307913  191271 command_runner.go:130] > Modify: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307919  191271 command_runner.go:130] > Change: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307922  191271 command_runner.go:130] >  Birth: -
	I0522 18:55:36.307945  191271 start.go:562] Will wait 60s for crictl version
	I0522 18:55:36.307977  191271 ssh_runner.go:195] Run: which crictl
	I0522 18:55:36.310791  191271 command_runner.go:130] > /usr/bin/crictl
	I0522 18:55:36.310921  191271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:55:36.342474  191271 command_runner.go:130] > Version:  0.1.0
	I0522 18:55:36.342498  191271 command_runner.go:130] > RuntimeName:  docker
	I0522 18:55:36.342505  191271 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:55:36.342511  191271 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:55:36.342526  191271 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:55:36.342561  191271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:55:36.363987  191271 command_runner.go:130] > 26.1.2
	I0522 18:55:36.365226  191271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:55:36.387207  191271 command_runner.go:130] > 26.1.2
	I0522 18:55:36.389505  191271 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:55:36.389579  191271 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:55:36.405602  191271 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:55:36.408842  191271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:55:36.418521  191271 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:55:36.418633  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:36.418681  191271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:55:36.434338  191271 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:55:36.434356  191271 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:55:36.434360  191271 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:55:36.434365  191271 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:55:36.434370  191271 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:55:36.434376  191271 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:55:36.434385  191271 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:55:36.434392  191271 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:55:36.434401  191271 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:36.434411  191271 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:55:36.435375  191271 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:55:36.435391  191271 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:55:36.435443  191271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:55:36.451482  191271 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:55:36.451502  191271 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:55:36.451508  191271 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:55:36.451513  191271 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:55:36.451518  191271 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:55:36.451523  191271 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:55:36.451536  191271 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:55:36.451540  191271 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:55:36.451545  191271 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:36.451553  191271 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:55:36.452593  191271 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:55:36.452609  191271 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:55:36.452620  191271 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:55:36.452743  191271 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
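The unit fragment above is shipped to the node as the kubelet drop-in (the 315-byte copy to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A sketch of the equivalent manual step, with the ExecStart taken verbatim from the log:

	sudo mkdir -p /etc/systemd/system/kubelet.service.d
	sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
	[Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	EOF
	sudo systemctl daemon-reload && sudo systemctl start kubelet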
	I0522 18:55:36.452799  191271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:55:36.491841  191271 command_runner.go:130] > cgroupfs
	I0522 18:55:36.493137  191271 cni.go:84] Creating CNI manager for ""
	I0522 18:55:36.493150  191271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:55:36.493167  191271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:55:36.493191  191271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:55:36.493314  191271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
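This rendered config is what the 2158-byte copy to /var/tmp/minikube/kubeadm.yaml.new below puts on the node; kubeadm later consumes it when the control plane is (re)started. An illustrative manual equivalent (the preflight-error handling minikube actually passes is not shown in this log):

	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml \
	    --ignore-preflight-errors=all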
	
	I0522 18:55:36.493364  191271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:55:36.500368  191271 command_runner.go:130] > kubeadm
	I0522 18:55:36.500385  191271 command_runner.go:130] > kubectl
	I0522 18:55:36.500390  191271 command_runner.go:130] > kubelet
	I0522 18:55:36.501014  191271 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:55:36.501074  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:55:36.508385  191271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:55:36.523332  191271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:55:36.537874  191271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0522 18:55:36.552595  191271 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:55:36.555448  191271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:55:36.564451  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.642902  191271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:55:36.654630  191271 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:55:36.654650  191271 certs.go:194] generating shared ca certs ...
	I0522 18:55:36.654663  191271 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:36.654795  191271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:55:36.654860  191271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:55:36.654873  191271 certs.go:256] generating profile certs ...
	I0522 18:55:36.654970  191271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:55:36.655041  191271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:55:36.655092  191271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:55:36.655106  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:55:36.655127  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:55:36.655145  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:55:36.655158  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:55:36.655171  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:55:36.655182  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:55:36.655196  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:55:36.655210  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:55:36.655259  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:55:36.655305  191271 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:55:36.655318  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:55:36.655347  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:55:36.655380  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:55:36.655406  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:55:36.655457  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:55:36.655490  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:55:36.655509  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:55:36.655527  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:36.656072  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:55:36.677388  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:55:36.698564  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:55:36.746137  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:55:36.774335  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:55:36.844940  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:55:36.867576  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:55:36.892332  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:55:36.915359  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:55:36.935989  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:55:36.956836  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:55:36.978204  191271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:55:36.992907  191271 ssh_runner.go:195] Run: openssl version
	I0522 18:55:36.997686  191271 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:55:36.997748  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:55:37.005519  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008401  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008425  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008462  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.014161  191271 command_runner.go:130] > 51391683
	I0522 18:55:37.014217  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:55:37.021650  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:55:37.029393  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032351  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032375  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032410  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.037998  191271 command_runner.go:130] > 3ec20f2e
	I0522 18:55:37.038254  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:55:37.045680  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:55:37.053800  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056711  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056742  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056791  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.062340  191271 command_runner.go:130] > b5213941
	I0522 18:55:37.062547  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
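The three openssl/ln pairs above publish each CA certificate under its OpenSSL subject-hash filename (51391683, 3ec20f2e, b5213941), which is the lookup scheme TLS clients use inside /etc/ssl/certs. The general pattern, shown for the minikube CA:

	# compute the subject hash, then expose the CA as <hash>.0
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"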
	I0522 18:55:37.069967  191271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:55:37.072857  191271 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:55:37.072876  191271 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0522 18:55:37.072882  191271 command_runner.go:130] > Device: 801h/2049d	Inode: 1307017     Links: 1
	I0522 18:55:37.072888  191271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:55:37.072894  191271 command_runner.go:130] > Access: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072899  191271 command_runner.go:130] > Modify: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072903  191271 command_runner.go:130] > Change: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072911  191271 command_runner.go:130] >  Birth: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072945  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:55:37.078522  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.078755  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:55:37.084341  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.084578  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:55:37.090035  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.090259  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:55:37.095704  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.095756  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:55:37.101044  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.101094  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0522 18:55:37.106347  191271 command_runner.go:130] > Certificate will not expire
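Each "-checkend 86400" invocation above asks openssl whether the certificate becomes invalid within the next 86,400 seconds (24 hours); openssl exits 0 and prints "Certificate will not expire" when it stays valid, which is what gets logged for every control-plane cert here. A hedged Go equivalent (the function name is invented for illustration):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithinDay reports whether the certificate at path becomes invalid
	// in the next 86400 seconds; openssl exits non-zero exactly in that case.
	func expiresWithinDay(path string) bool {
		err := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400").Run()
		return err != nil
	}

	func main() {
		fmt.Println(expiresWithinDay("/var/lib/minikube/certs/etcd/peer.crt"))
	}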
	I0522 18:55:37.106403  191271 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:37.106497  191271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:55:37.124843  191271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:55:37.132393  191271 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0522 18:55:37.132411  191271 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0522 18:55:37.132419  191271 command_runner.go:130] > /var/lib/minikube/etcd:
	I0522 18:55:37.132424  191271 command_runner.go:130] > member
	W0522 18:55:37.132447  191271 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:55:37.132459  191271 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:55:37.132465  191271 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:55:37.132505  191271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:55:37.139565  191271 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:55:37.139949  191271 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-737786" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.140068  191271 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-737786" cluster setting kubeconfig missing "multinode-737786" context setting]
	I0522 18:55:37.140319  191271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:37.140688  191271 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.140913  191271 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:55:37.141318  191271 cert_rotation.go:137] Starting client certificate rotation controller
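The repair above rewrites the shared kubeconfig under a file lock, re-adding the missing multinode-737786 cluster and context entries before a client is built from it. A rough client-go sketch of that repair using values from the log (the lock and the auth-info entries minikube also writes are omitted):

	package main

	import (
		"k8s.io/client-go/tools/clientcmd"
		api "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := "/home/jenkins/minikube-integration/18943-9771/kubeconfig"
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			panic(err)
		}
		// Re-add the cluster and context entries the verifier reported missing.
		cfg.Clusters["multinode-737786"] = &api.Cluster{
			Server:               "https://192.168.67.2:8443",
			CertificateAuthority: "/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt",
		}
		cfg.Contexts["multinode-737786"] = &api.Context{
			Cluster:  "multinode-737786",
			AuthInfo: "multinode-737786",
		}
		cfg.CurrentContext = "multinode-737786"
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			panic(err)
		}
	}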
	I0522 18:55:37.141459  191271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:55:37.148863  191271 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.67.2
	I0522 18:55:37.148895  191271 kubeadm.go:591] duration metric: took 16.425758ms to restartPrimaryControlPlane
	I0522 18:55:37.148904  191271 kubeadm.go:393] duration metric: took 42.505287ms to StartCluster
	I0522 18:55:37.148931  191271 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:37.148985  191271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.149459  191271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:37.149654  191271 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:55:37.152713  191271 out.go:177] * Verifying Kubernetes components...
	I0522 18:55:37.149721  191271 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:55:37.149877  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:37.153954  191271 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:55:37.153961  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:37.153992  191271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:55:37.153957  191271 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:55:37.154051  191271 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	W0522 18:55:37.154065  191271 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:55:37.154096  191271 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:37.154247  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.154486  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.171776  191271 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.172020  191271 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:55:37.173669  191271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:37.172259  191271 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	W0522 18:55:37.173707  191271 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:55:37.173740  191271 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:37.174905  191271 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.174926  191271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:55:37.174967  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:37.174090  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.190845  191271 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.190870  191271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:55:37.190937  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:37.197226  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:37.210979  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:37.239056  191271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:55:37.249298  191271 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:55:37.249409  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:37.249419  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:37.249426  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:37.249430  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:37.249651  191271 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:55:37.249672  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:37.292541  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.309037  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.371074  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.371130  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.371176  191271 retry.go:31] will retry after 264.181237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.460775  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.460825  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.460847  191271 retry.go:31] will retry after 133.777268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.595213  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.635676  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.749887  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:37.749959  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:37.749982  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:37.749999  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:37.750293  191271 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:55:37.750342  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:37.844082  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.844160  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.844205  191271 retry.go:31] will retry after 478.031663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.853584  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.857211  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.857246  191271 retry.go:31] will retry after 515.22721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
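Both addon manifests fail their first kubectl apply because the apiserver is not yet listening (connection refused on localhost:8443), so minikube's retry helper re-runs them after short randomized delays, switching to "apply --force" on the retries. A sketch of such a retry loop with apimachinery's wait package; the Backoff values are illustrative, not the ones retry.go uses:

	package main

	import (
		"fmt"
		"os/exec"
		"time"

		"k8s.io/apimachinery/pkg/util/wait"
	)

	func main() {
		backoff := wait.Backoff{
			Duration: 250 * time.Millisecond, // first delay, about what the log shows
			Factor:   1.5,
			Jitter:   0.5, // randomizes delays, giving values like 264ms and 478ms
			Steps:    5,
		}
		err := wait.ExponentialBackoff(backoff, func() (bool, error) {
			out, err := exec.Command("kubectl", "apply", "--force", "-f",
				"/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
			if err != nil {
				fmt.Printf("apply failed, will retry: %s\n", out)
				return false, nil // not done, no fatal error: retry after next delay
			}
			return true, nil
		})
		if err != nil {
			fmt.Println("gave up:", err)
		}
	}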
	I0522 18:55:38.249559  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:38.249587  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:38.249598  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:38.249602  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:38.323157  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:38.373635  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:39.946432  191271 round_trippers.go:574] Response Status: 200 OK in 1696 milliseconds
	I0522 18:55:39.946464  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:39.946474  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:39.946478  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:39.946482  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:39.946485  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:39 GMT
	I0522 18:55:39.946489  191271 round_trippers.go:580]     Audit-Id: 25c542b6-5d69-4e1f-b457-019f46d0b3c3
	I0522 18:55:39.946493  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:39.947402  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:39.948394  191271 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:55:39.948474  191271 node_ready.go:38] duration metric: took 2.699146059s for node "multinode-737786" to be "Ready" ...
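node_ready polls GET /api/v1/nodes/<name> until the Node reports a Ready=True condition; the 2.7s recorded here is dominated by the first GET, which stalls until the restarted apiserver answers (the empty "Response Status:" lines above). A minimal client-go sketch of that wait, with the 6m0s timeout from the log; clientset construction is omitted:

	package readiness

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls until the named node reports Ready=True, in the same
	// spirit as node_ready.go above. Poll errors are swallowed because the
	// apiserver may still be coming up, as in the failed GETs at 18:55:37.
	func waitNodeReady(cs kubernetes.Interface, name string) error {
		return wait.PollUntilContextTimeout(context.Background(),
			500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // keep polling until the timeout
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}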
	I0522 18:55:39.948494  191271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:55:39.948584  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:39.948597  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:39.948606  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:39.948613  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:39.963427  191271 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0522 18:55:39.963451  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:39.963460  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:39.963465  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:39.963470  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:39 GMT
	I0522 18:55:39.963473  191271 round_trippers.go:580]     Audit-Id: 29cd26ef-7452-4010-9449-59e360709035
	I0522 18:55:39.963477  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:39.963481  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.048655  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1526"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 57633 chars]
	I0522 18:55:40.053660  191271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.053824  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:55:40.053850  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.053870  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.053883  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.059346  191271 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:55:40.059421  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.059441  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.059454  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.059469  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:40.059497  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:40.059516  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.059527  191271 round_trippers.go:580]     Audit-Id: 93431c2f-ec1f-4fd8-800a-aecc0626a610
	I0522 18:55:40.059734  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:55:40.060346  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.060403  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.060422  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.060435  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.146472  191271 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0522 18:55:40.146560  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.146586  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.146596  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.146600  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.146604  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.146608  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.146631  191271 round_trippers.go:580]     Audit-Id: c46997f5-8ce7-490a-b32e-d6ef84d46be8
	I0522 18:55:40.146761  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.147186  191271 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.147238  191271 pod_ready.go:81] duration metric: took 93.497189ms for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.147262  191271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.147415  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:55:40.147441  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.147460  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.147477  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.150508  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:40.150572  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.150588  191271 round_trippers.go:580]     Audit-Id: f365bd66-16d5-494e-bd03-4c158b4f19e1
	I0522 18:55:40.150601  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.150627  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.150631  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.150634  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.150638  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.150819  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:55:40.151364  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.151381  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.151391  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.151399  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.152784  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.152801  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.152811  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.152818  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.152831  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.152835  191271 round_trippers.go:580]     Audit-Id: d3e15c5e-fb13-4bdb-9f2b-e5251d5bd358
	I0522 18:55:40.152845  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.152850  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.152966  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.153383  191271 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.153406  191271 pod_ready.go:81] duration metric: took 6.080227ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.153421  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.153519  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:55:40.153530  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.153540  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.153545  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.155159  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.155179  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.155188  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.155205  191271 round_trippers.go:580]     Audit-Id: bf655a4b-43df-4c1b-8ffa-6e7ba1c46ee2
	I0522 18:55:40.155210  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.155215  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.155228  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.155231  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.155475  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:55:40.156172  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.156186  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.156195  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.156200  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.157607  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.157621  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.157629  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.157634  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.157638  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.157643  191271 round_trippers.go:580]     Audit-Id: a4d9b4c1-ce60-4797-85b7-8f19f338b51d
	I0522 18:55:40.157646  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.157650  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.158173  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.158539  191271 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.158551  191271 pod_ready.go:81] duration metric: took 5.118553ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.158561  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.158613  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:55:40.158618  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.158628  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.158634  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.162612  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:40.162628  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.162637  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.162641  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.162647  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.162652  191271 round_trippers.go:580]     Audit-Id: 4541150f-2cd8-4ffe-962b-1e97b5fbf351
	I0522 18:55:40.162666  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.162671  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.163141  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:55:40.163704  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.163735  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.163746  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.163769  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.167888  191271 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:55:40.167909  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.167918  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.167924  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.167928  191271 round_trippers.go:580]     Audit-Id: c457d921-e201-4732-9893-b1385b6f1926
	I0522 18:55:40.167950  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.167961  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.167965  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.168254  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.168604  191271 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.168619  191271 pod_ready.go:81] duration metric: took 10.05025ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.168630  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.168682  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:55:40.168687  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.168696  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.168746  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.171083  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.171096  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.171102  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.171106  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.171108  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.171111  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.171115  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.171118  191271 round_trippers.go:580]     Audit-Id: 60a65c08-d5cd-4e57-814c-1732c8213de5
	I0522 18:55:40.171338  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:55:40.171744  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.171761  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.171778  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.171785  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.172894  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.172909  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.172917  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.172923  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.172941  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.172949  191271 round_trippers.go:580]     Audit-Id: 4a8b876d-cd28-40b0-8c7f-d0d3dcdf9a8a
	I0522 18:55:40.172954  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.172960  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.173055  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.173292  191271 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.173304  191271 pod_ready.go:81] duration metric: took 4.667435ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.173312  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.250373  191271 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0522 18:55:40.253545  191271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.930348316s)
	I0522 18:55:40.253673  191271 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:55:40.253686  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.253693  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.253697  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.255762  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.255783  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.255791  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.255797  191271 round_trippers.go:580]     Content-Length: 1274
	I0522 18:55:40.255802  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.255806  191271 round_trippers.go:580]     Audit-Id: 29bedd89-fc34-4a6e-af90-ad42da35c8fd
	I0522 18:55:40.255818  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.255822  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.255827  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.255874  191271 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0522 18:55:40.256474  191271 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:55:40.256539  191271 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:55:40.256552  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.256570  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.256575  191271 round_trippers.go:473]     Content-Type: application/json
	I0522 18:55:40.256579  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.259420  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.259441  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.259451  191271 round_trippers.go:580]     Audit-Id: 82c4354f-9c05-40ba-a5a3-b7a52e45d257
	I0522 18:55:40.259456  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.259460  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.259463  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.259466  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.259468  191271 round_trippers.go:580]     Content-Length: 1220
	I0522 18:55:40.259471  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.259529  191271 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:55:40.349664  191271 request.go:629] Waited for 176.30464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:55:40.349765  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:55:40.349773  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.349781  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.349789  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.351637  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.351668  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.351677  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.351681  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.351685  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.351689  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.351694  191271 round_trippers.go:580]     Audit-Id: e542db4a-5526-4b20-9370-c944caf3811a
	I0522 18:55:40.351699  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.351857  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:55:40.548957  191271 request.go:629] Waited for 196.618236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.549041  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.549047  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.549058  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.549067  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.551086  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.551107  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.551116  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.551122  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.551129  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.551133  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.551139  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.551143  191271 round_trippers.go:580]     Audit-Id: 1e1c2f19-5fcc-4420-a18e-31b43efa6830
	I0522 18:55:40.551354  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.551644  191271 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.551661  191271 pod_ready.go:81] duration metric: took 378.342194ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.551670  191271 pod_ready.go:38] duration metric: took 603.166138ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
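Editor's note: the pod_ready.go wait above amounts to polling each system pod's Ready condition until it reports True. A rough client-go equivalent, as a sketch only (real config needs the cluster's client certificates; the pod name is taken from the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // podReady reports whether the pod's Ready condition is True, the same
    // check pod_ready.go logs above.
    func podReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg := &rest.Config{Host: "https://192.168.67.2:8443"} // placeholder; certs elided
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-multinode-737786", metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling on transient errors
                }
                return podReady(pod), nil
            })
        fmt.Println("ready:", err == nil)
    }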
	I0522 18:55:40.551692  191271 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:55:40.551735  191271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:55:40.607738  191271 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0522 18:55:40.607762  191271 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0522 18:55:40.607769  191271 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:55:40.607776  191271 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:55:40.607780  191271 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0522 18:55:40.607785  191271 command_runner.go:130] > pod/storage-provisioner configured
	I0522 18:55:40.607801  191271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23414082s)
	I0522 18:55:40.607851  191271 command_runner.go:130] > 1914
	I0522 18:55:40.607887  191271 api_server.go:72] duration metric: took 3.45820997s to wait for apiserver process to appear ...
	I0522 18:55:40.610759  191271 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:55:40.607897  191271 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:55:40.611942  191271 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:55:40.611952  191271 addons.go:505] duration metric: took 3.462227154s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0522 18:55:40.615330  191271 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0522 18:55:40.615348  191271 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0522 18:55:41.112944  191271 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:55:41.117073  191271 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
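Editor's note: the 500-then-200 sequence above is the normal healthz settling pattern just after an apiserver restart; the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks finish last. A minimal stand-in for the poll loop (skipping cert verification for brevity, which minikube's real client does not do):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
            Timeout:   2 * time.Second,
        }
        for i := 0; i < 20; i++ {
            resp, err := client.Get("https://192.168.67.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz: %s\n", body) // "ok"
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
        }
    }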
	I0522 18:55:41.117157  191271 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:55:41.117169  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.117179  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.117183  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.118187  191271 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:55:41.118209  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.118219  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.118225  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.118233  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.118238  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.118246  191271 round_trippers.go:580]     Content-Length: 263
	I0522 18:55:41.118249  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.118253  191271 round_trippers.go:580]     Audit-Id: 3e08a892-73f4-4e1c-b61f-1d2036a1b85f
	I0522 18:55:41.118286  191271 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:55:41.118396  191271 api_server.go:141] control plane version: v1.30.1
	I0522 18:55:41.118421  191271 api_server.go:131] duration metric: took 506.49152ms to wait for apiserver health ...
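Editor's note: the version probe above simply decodes the apiserver's /version document into the fields shown in the response body. A standard-library sketch of that decode (endpoint and TLS handling are placeholders, as before):

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // versionInfo mirrors the /version response body logged above.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // stand-in for minikube's client certs
        }}
        resp, err := client.Get("https://192.168.67.2:8443/version")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        fmt.Printf("control plane version: %s (%s)\n", v.GitVersion, v.Platform) // v1.30.1
    }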
	I0522 18:55:41.118429  191271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:55:41.118489  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:41.118499  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.118508  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.118517  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.121750  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:41.121770  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.121780  191271 round_trippers.go:580]     Audit-Id: d8ba21dd-cd96-4231-9557-114d06d5b330
	I0522 18:55:41.121787  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.121802  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.121807  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.121824  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.121832  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.122518  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1537","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60702 chars]
	I0522 18:55:41.124999  191271 system_pods.go:59] 8 kube-system pods found
	I0522 18:55:41.125049  191271 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:55:41.125064  191271 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:55:41.125078  191271 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:55:41.125095  191271 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:55:41.125108  191271 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:55:41.125123  191271 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:55:41.125135  191271 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:55:41.125143  191271 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0522 18:55:41.125153  191271 system_pods.go:74] duration metric: took 6.71923ms to wait for pod list to return data ...
	I0522 18:55:41.125171  191271 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:55:41.125259  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:55:41.125270  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.125279  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.125284  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.127424  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:41.127447  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.127456  191271 round_trippers.go:580]     Audit-Id: a747672b-a20b-4af5-ade4-ea4b67829eed
	I0522 18:55:41.127461  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.127465  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.127482  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.127491  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.127494  191271 round_trippers.go:580]     Content-Length: 262
	I0522 18:55:41.127497  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.127525  191271 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:55:41.127713  191271 default_sa.go:45] found service account: "default"
	I0522 18:55:41.127733  191271 default_sa.go:55] duration metric: took 2.553683ms for default service account to be created ...
	I0522 18:55:41.127742  191271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:55:41.149070  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:41.149091  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.149101  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.149106  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.153123  191271 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:55:41.153141  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.153148  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.153151  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.153153  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.153156  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.153158  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.153161  191271 round_trippers.go:580]     Audit-Id: 2b84be8b-87e2-4071-b787-4728703fa23e
	I0522 18:55:41.154286  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1537","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60702 chars]
	I0522 18:55:41.157071  191271 system_pods.go:86] 8 kube-system pods found
	I0522 18:55:41.157102  191271 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:55:41.157114  191271 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:55:41.157125  191271 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:55:41.157143  191271 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:55:41.157161  191271 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:55:41.157175  191271 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:55:41.157188  191271 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:55:41.157216  191271 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0522 18:55:41.157231  191271 system_pods.go:126] duration metric: took 29.478851ms to wait for k8s-apps to be running ...
	I0522 18:55:41.157247  191271 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:55:41.157295  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:55:41.169086  191271 system_svc.go:56] duration metric: took 11.831211ms WaitForService to wait for kubelet
	I0522 18:55:41.169113  191271 kubeadm.go:576] duration metric: took 4.019434744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:55:41.169134  191271 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:55:41.349440  191271 request.go:629] Waited for 180.210127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:55:41.349507  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:55:41.349515  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.349525  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.349532  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.352161  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:41.352182  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.352190  191271 round_trippers.go:580]     Audit-Id: 7838a856-6baa-4e95-bfcc-54203cf8503d
	I0522 18:55:41.352195  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.352201  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.352206  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.352219  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.352223  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.352341  191271 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 5264 chars]
	I0522 18:55:41.352807  191271 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:55:41.352842  191271 node_conditions.go:123] node cpu capacity is 8
	I0522 18:55:41.352854  191271 node_conditions.go:105] duration metric: took 183.714016ms to run NodePressure ...
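Editor's note: node_conditions.go above reads ephemeral-storage and CPU capacity straight off the Node object; the same numbers are available to any client. A sketch (clientset wiring elided as in the earlier examples):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg := &rest.Config{Host: "https://192.168.67.2:8443"} // placeholder; certs elided
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            // e.g. "multinode-737786: cpu=8 ephemeral-storage=304681132Ki"
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
        }
    }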
	I0522 18:55:41.352869  191271 start.go:240] waiting for startup goroutines ...
	I0522 18:55:41.352879  191271 start.go:245] waiting for cluster config update ...
	I0522 18:55:41.352892  191271 start.go:254] writing updated cluster config ...
	I0522 18:55:41.354996  191271 out.go:177] 
	I0522 18:55:41.356517  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:41.356594  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:41.358237  191271 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:55:41.359528  191271 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:55:41.360862  191271 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:55:41.362023  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:41.362046  191271 cache.go:56] Caching tarball of preloaded images
	I0522 18:55:41.362122  191271 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:55:41.362131  191271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:55:41.362129  191271 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:55:41.362226  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:41.379872  191271 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:55:41.379903  191271 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:55:41.379925  191271 cache.go:194] Successfully downloaded all kic artifacts
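Editor's note: the image.go check above boils down to asking the local daemon whether the kic base image already exists so the pull can be skipped. A sketch of that check via the docker CLI from Go (the image reference is abbreviated here; the real one pins a sha256 digest):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `docker image inspect` exits 0 iff the image is present locally.
        img := "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887"
        if err := exec.Command("docker", "image", "inspect", img).Run(); err != nil {
            fmt.Println("not in local daemon, would pull:", err)
            return
        }
        fmt.Println("found in local daemon, skipping pull")
    }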
	I0522 18:55:41.379963  191271 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:41.380037  191271 start.go:364] duration metric: took 46.895µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:55:41.380065  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:41.380079  191271 fix.go:54] fixHost starting: m02
	I0522 18:55:41.380381  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:41.396179  191271 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Stopped err=<nil>
	W0522 18:55:41.396216  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:41.398486  191271 out.go:177] * Restarting existing docker container for "multinode-737786-m02" ...
	I0522 18:55:41.399852  191271 cli_runner.go:164] Run: docker start multinode-737786-m02
	I0522 18:55:41.739232  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:41.759108  191271 kic.go:430] container "multinode-737786-m02" state is running.
	I0522 18:55:41.759532  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:41.781733  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:55:41.781802  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:41.800873  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32932 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	W0522 18:55:41.801752  191271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59104->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:41.801799  191271 retry.go:31] will retry after 178.387586ms: ssh: handshake failed: read tcp 127.0.0.1:59104->127.0.0.1:32932: read: connection reset by peer
	W0522 18:55:41.981559  191271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59106->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:41.981589  191271 retry.go:31] will retry after 356.566239ms: ssh: handshake failed: read tcp 127.0.0.1:59106->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:42.427258  191271 command_runner.go:130] > 27%
	I0522 18:55:42.427545  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:55:42.431134  191271 command_runner.go:130] > 213G
	I0522 18:55:42.431367  191271 fix.go:56] duration metric: took 1.051284151s for fixHost
	I0522 18:55:42.431388  191271 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1.051331877s
	W0522 18:55:42.431406  191271 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:55:42.431491  191271 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:55:42.431503  191271 start.go:728] Will try again in 5 seconds ...
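Editor's note: the "container addresses should have 2 values, got 1 values" failure above follows from the inspect template run at 18:55:41.759532. When the named key is missing from .NetworkSettings.Networks (as can happen briefly after `docker start`, before the network attachment is reported), {{with ...}} renders nothing, and splitting the empty result on "," yields one element instead of an IPv4/IPv6 pair. A self-contained reproduction with text/template, with the inspect JSON simulated as nested maps:

    package main

    import (
        "bytes"
        "fmt"
        "strings"
        "text/template"
    )

    func main() {
        // The same template shape minikube passes to `docker container inspect -f`.
        tmpl := template.Must(template.New("ip").Parse(
            `{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`))

        // Simulated inspect output: the expected network key is absent.
        data := map[string]any{
            "NetworkSettings": map[string]any{
                "Networks": map[string]any{},
            },
        }
        var out bytes.Buffer
        if err := tmpl.Execute(&out, data); err != nil {
            panic(err)
        }
        parts := strings.Split(out.String(), ",")
        fmt.Printf("%d value(s): %q\n", len(parts), parts) // prints: 1 value(s): [""]
    }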
	I0522 18:55:47.432624  191271 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:47.432726  191271 start.go:364] duration metric: took 68.501µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:55:47.432755  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:47.432767  191271 fix.go:54] fixHost starting: m02
	I0522 18:55:47.433049  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:47.449030  191271 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Running err=<nil>
	W0522 18:55:47.449055  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:47.451390  191271 out.go:177] * Updating the running docker "multinode-737786-m02" container ...
	I0522 18:55:47.452545  191271 machine.go:94] provisionDockerMachine start ...
	I0522 18:55:47.452614  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.468745  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.468930  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.468943  191271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:55:47.578455  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:55:47.578487  191271 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:55:47.578548  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.595125  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.595343  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.595360  191271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:55:47.721219  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:55:47.721292  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.737411  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.737578  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.737594  191271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:55:47.850975  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:55:47.851002  191271 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:55:47.851027  191271 ubuntu.go:177] setting up certificates
	I0522 18:55:47.851040  191271 provision.go:84] configureAuth start
	I0522 18:55:47.851098  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.865910  191271 provision.go:87] duration metric: took 14.860061ms to configureAuth
	W0522 18:55:47.865931  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.865957  191271 retry.go:31] will retry after 87.876µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.867083  191271 provision.go:84] configureAuth start
	I0522 18:55:47.867151  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.883908  191271 provision.go:87] duration metric: took 16.806772ms to configureAuth
	W0522 18:55:47.883927  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.883942  191271 retry.go:31] will retry after 102.785µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.885049  191271 provision.go:84] configureAuth start
	I0522 18:55:47.885127  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.899850  191271 provision.go:87] duration metric: took 14.775266ms to configureAuth
	W0522 18:55:47.899866  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.899883  191271 retry.go:31] will retry after 127.962µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.900992  191271 provision.go:84] configureAuth start
	I0522 18:55:47.901044  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.915918  191271 provision.go:87] duration metric: took 14.910204ms to configureAuth
	W0522 18:55:47.915936  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.915950  191271 retry.go:31] will retry after 176.177µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.917057  191271 provision.go:84] configureAuth start
	I0522 18:55:47.917110  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.933132  191271 provision.go:87] duration metric: took 16.057912ms to configureAuth
	W0522 18:55:47.933147  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.933162  191271 retry.go:31] will retry after 415.738µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.934277  191271 provision.go:84] configureAuth start
	I0522 18:55:47.934340  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.949561  191271 provision.go:87] duration metric: took 15.2663ms to configureAuth
	W0522 18:55:47.949578  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.949593  191271 retry.go:31] will retry after 695.271µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.950702  191271 provision.go:84] configureAuth start
	I0522 18:55:47.950753  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.965237  191271 provision.go:87] duration metric: took 14.518838ms to configureAuth
	W0522 18:55:47.965256  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.965273  191271 retry.go:31] will retry after 624.889µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.966387  191271 provision.go:84] configureAuth start
	I0522 18:55:47.966449  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.981238  191271 provision.go:87] duration metric: took 14.830065ms to configureAuth
	W0522 18:55:47.981257  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.981273  191271 retry.go:31] will retry after 1.057459ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.982393  191271 provision.go:84] configureAuth start
	I0522 18:55:47.982466  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.998674  191271 provision.go:87] duration metric: took 16.255395ms to configureAuth
	W0522 18:55:47.998692  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.998712  191271 retry.go:31] will retry after 2.801269ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.001909  191271 provision.go:84] configureAuth start
	I0522 18:55:48.001983  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.017417  191271 provision.go:87] duration metric: took 15.487122ms to configureAuth
	W0522 18:55:48.017438  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.017457  191271 retry.go:31] will retry after 2.6692ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.020641  191271 provision.go:84] configureAuth start
	I0522 18:55:48.020707  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.035890  191271 provision.go:87] duration metric: took 15.231178ms to configureAuth
	W0522 18:55:48.035907  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.035925  191271 retry.go:31] will retry after 4.913205ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.041121  191271 provision.go:84] configureAuth start
	I0522 18:55:48.041190  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.056341  191271 provision.go:87] duration metric: took 15.201859ms to configureAuth
	W0522 18:55:48.056358  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.056374  191271 retry.go:31] will retry after 8.73344ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.065553  191271 provision.go:84] configureAuth start
	I0522 18:55:48.065620  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.080469  191271 provision.go:87] duration metric: took 14.898331ms to configureAuth
	W0522 18:55:48.080489  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.080506  191271 retry.go:31] will retry after 13.355259ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.094679  191271 provision.go:84] configureAuth start
	I0522 18:55:48.094748  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.109923  191271 provision.go:87] duration metric: took 15.225024ms to configureAuth
	W0522 18:55:48.109942  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.109959  191271 retry.go:31] will retry after 17.591086ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.128159  191271 provision.go:84] configureAuth start
	I0522 18:55:48.128244  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.143258  191271 provision.go:87] duration metric: took 15.081459ms to configureAuth
	W0522 18:55:48.143309  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.143328  191271 retry.go:31] will retry after 30.694182ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.174523  191271 provision.go:84] configureAuth start
	I0522 18:55:48.174643  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.190339  191271 provision.go:87] duration metric: took 15.791254ms to configureAuth
	W0522 18:55:48.190355  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.190371  191271 retry.go:31] will retry after 60.478865ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.251580  191271 provision.go:84] configureAuth start
	I0522 18:55:48.251680  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.267446  191271 provision.go:87] duration metric: took 15.839853ms to configureAuth
	W0522 18:55:48.267466  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.267484  191271 retry.go:31] will retry after 63.884927ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.331706  191271 provision.go:84] configureAuth start
	I0522 18:55:48.331794  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.347085  191271 provision.go:87] duration metric: took 15.328539ms to configureAuth
	W0522 18:55:48.347105  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.347122  191271 retry.go:31] will retry after 87.655661ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.435332  191271 provision.go:84] configureAuth start
	I0522 18:55:48.435425  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.451751  191271 provision.go:87] duration metric: took 16.388799ms to configureAuth
	W0522 18:55:48.451774  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.451793  191271 retry.go:31] will retry after 195.353755ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.648137  191271 provision.go:84] configureAuth start
	I0522 18:55:48.648216  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.663505  191271 provision.go:87] duration metric: took 15.339444ms to configureAuth
	W0522 18:55:48.663523  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.663539  191271 retry.go:31] will retry after 289.097561ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.952931  191271 provision.go:84] configureAuth start
	I0522 18:55:48.953045  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.968997  191271 provision.go:87] duration metric: took 16.035059ms to configureAuth
	W0522 18:55:48.969019  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.969037  191271 retry.go:31] will retry after 186.761832ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.156383  191271 provision.go:84] configureAuth start
	I0522 18:55:49.156459  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:49.173159  191271 provision.go:87] duration metric: took 16.748544ms to configureAuth
	W0522 18:55:49.173181  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.173199  191271 retry.go:31] will retry after 327.938905ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.501699  191271 provision.go:84] configureAuth start
	I0522 18:55:49.501785  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:49.517950  191271 provision.go:87] duration metric: took 16.220449ms to configureAuth
	W0522 18:55:49.517970  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.517987  191271 retry.go:31] will retry after 817.802375ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:50.336261  191271 provision.go:84] configureAuth start
	I0522 18:55:50.336358  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:50.352199  191271 provision.go:87] duration metric: took 15.908402ms to configureAuth
	W0522 18:55:50.352217  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:50.352235  191271 retry.go:31] will retry after 975.249665ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:51.327901  191271 provision.go:84] configureAuth start
	I0522 18:55:51.327997  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:51.343571  191271 provision.go:87] duration metric: took 15.641557ms to configureAuth
	W0522 18:55:51.343589  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:51.343604  191271 retry.go:31] will retry after 1.511582383s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:52.855327  191271 provision.go:84] configureAuth start
	I0522 18:55:52.855421  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:52.874130  191271 provision.go:87] duration metric: took 18.776068ms to configureAuth
	W0522 18:55:52.874152  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:52.874173  191271 retry.go:31] will retry after 2.587827778s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:55.462838  191271 provision.go:84] configureAuth start
	I0522 18:55:55.462920  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:55.479954  191271 provision.go:87] duration metric: took 17.080473ms to configureAuth
	W0522 18:55:55.479973  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:55.479992  191271 retry.go:31] will retry after 4.788436213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:00.268555  191271 provision.go:84] configureAuth start
	I0522 18:56:00.268664  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:00.284768  191271 provision.go:87] duration metric: took 16.187921ms to configureAuth
	W0522 18:56:00.284787  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:00.284804  191271 retry.go:31] will retry after 4.16940433s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:04.458082  191271 provision.go:84] configureAuth start
	I0522 18:56:04.458158  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:04.474138  191271 provision.go:87] duration metric: took 16.031529ms to configureAuth
	W0522 18:56:04.474155  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:04.474171  191271 retry.go:31] will retry after 11.936949428s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:16.411971  191271 provision.go:84] configureAuth start
	I0522 18:56:16.412062  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:16.427556  191271 provision.go:87] duration metric: took 15.558638ms to configureAuth
	W0522 18:56:16.427574  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:16.427592  191271 retry.go:31] will retry after 9.484561192s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:25.912297  191271 provision.go:84] configureAuth start
	I0522 18:56:25.912384  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:25.927852  191271 provision.go:87] duration metric: took 15.527116ms to configureAuth
	W0522 18:56:25.927874  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:25.927894  191271 retry.go:31] will retry after 27.958237861s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:53.888233  191271 provision.go:84] configureAuth start
	I0522 18:56:53.888316  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:53.906509  191271 provision.go:87] duration metric: took 18.250582ms to configureAuth
	W0522 18:56:53.906529  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:53.906545  191271 retry.go:31] will retry after 38.774225348s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.682746  191271 provision.go:84] configureAuth start
	I0522 18:57:32.682888  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:57:32.700100  191271 provision.go:87] duration metric: took 17.312123ms to configureAuth
	W0522 18:57:32.700120  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.700141  191271 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.700149  191271 machine.go:97] duration metric: took 1m45.247591588s to provisionDockerMachine
	I0522 18:57:32.700204  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:57:32.700240  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:57:32.716059  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32932 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:57:32.795615  191271 command_runner.go:130] > 27%
	I0522 18:57:32.795930  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:57:32.799643  191271 command_runner.go:130] > 213G
	I0522 18:57:32.799841  191271 fix.go:56] duration metric: took 1m45.367071968s for fixHost
	I0522 18:57:32.799861  191271 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m45.367119086s
	W0522 18:57:32.799939  191271 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.801845  191271 out.go:177] 
	W0522 18:57:32.802985  191271 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:57:32.802997  191271 out.go:239] * 
	W0522 18:57:32.803803  191271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:57:32.805200  191271 out.go:177] 
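
Every configureAuth retry above fails the same way, in ~15-20ms: minikube renders a Go template over `docker container inspect` expecting "IPv4,IPv6", but the "multinode-737786-m02" key is absent from .NetworkSettings.Networks, so the template prints nothing, and splitting the empty output on a comma still yields one element. A minimal sketch of that parse (an assumption about the check behind the message, not minikube's exact source):

package main

import (
	"fmt"
	"strings"
)

// parseAddrs mimics the check behind "container addresses should have 2
// values, got 1 values: []". strings.Split("", ",") returns [""] — a
// one-element slice that %v prints as [], matching the log line exactly.
func parseAddrs(templateOut string) ([]string, error) {
	ips := strings.Split(strings.TrimSpace(templateOut), ",")
	if len(ips) != 2 {
		return nil, fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(ips), ips)
	}
	return ips, nil
}

func main() {
	// Empty template output: the {{with (index .NetworkSettings.Networks
	// "multinode-737786-m02")}} block rendered nothing.
	if _, err := parseAddrs(""); err != nil {
		fmt.Println(err)
	}
}

This also explains why backing off from 289ms to 38s never helps: the inspect command itself succeeds each time, but the network attachment it is looking for never appears.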
	
	
	==> Docker <==
	May 22 18:55:36 multinode-737786 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Start docker client with request timeout 0s"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Hairpin mode is set to hairpin-veth"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Loaded network plugin cni"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Docker cri networking managed by network plugin cni"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Setting cgroupDriver cgroupfs"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Start cri-dockerd grpc backend"
	May 22 18:55:36 multinode-737786 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-7zbr8_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7fefb8ab9046a93fa90099406fe22d3ab5b99d1e81ed91b35c2e7790f7cd2c3c\""
	May 22 18:55:36 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:36Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ada6e7b25c53306480ec3268f02ae3c0a31843cb50792174aefef87684d072cd\""
	May 22 18:55:37 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/2558846c3bbbbf87e93dd3aeb7b7261d3f13942bfe05699803c3b8aac20f7e85/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:37 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/74a359ee9dc7609983bfa8ac08fe4d45b153467414c884d024082e864b5170f6/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:37 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b6f81208c49be20c2ce466f1d45caff3944731d4d6d47de580685eab70a7397/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:37 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd5e5467e43217c5e999d05af37ed4a9d45b01e53e6f10773150099d220720d7/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:40 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fb1d360112edd5f1fefe695c76c60c4bcb6ff37c4ff1d3557141f077bc1d13ec/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f7044f4a3341c31c26a26c9a54148b5edf783501f39de034de125ea0756da88/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6b52bbcc47a83fe266e6f891da30d8acaee28a3ce90bbbfa7209a66a33a7fc4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/635f4e9d5f8f1c8d7e841846d31b2e5cf268c887e750af271ef32caeb22d24a1/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6b2b3d758240c7c593442266ca02c7d49dce426e0b92147a72b5a13d59d90d0/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:56:11 multinode-737786 dockerd[960]: time="2024-05-22T18:56:11.560442726Z" level=info msg="ignoring event" container=11bb4599579bf5a23ef05bb4313bbd0b1ad6e971d79409ac99180f1970fef76b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	2775772a4970a       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   6f7044f4a3341       storage-provisioner
	513df62eec3d7       cbb01a7bd410d                                                                                         About a minute ago   Running             coredns                   2                   635f4e9d5f8f1       coredns-7db6d8ff4d-jhsz9
	ca4e4fb6fa63f       8c811b4aec35f                                                                                         About a minute ago   Running             busybox                   1                   a6b52bbcc47a8       busybox-fc5497c4f-7zbr8
	43dd6bc557dd6       ac1c61439df46                                                                                         About a minute ago   Running             kindnet-cni               1                   a6b2b3d758240       kindnet-qpfbl
	11bb4599579bf       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   6f7044f4a3341       storage-provisioner
	9e66337e0a3b0       747097150317f                                                                                         About a minute ago   Running             kube-proxy                1                   fb1d360112edd       kube-proxy-kqtgj
	f57ae12003854       25a1387cdab82                                                                                         About a minute ago   Running             kube-controller-manager   1                   fd5e5467e4321       kube-controller-manager-multinode-737786
	495d862fbc889       91be940803172                                                                                         About a minute ago   Running             kube-apiserver            1                   7b6f81208c49b       kube-apiserver-multinode-737786
	94cf43c9c1855       a52dc94f0a912                                                                                         About a minute ago   Running             kube-scheduler            1                   74a359ee9dc76       kube-scheduler-multinode-737786
	eefaf11c384e1       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      1                   2558846c3bbbb       etcd-multinode-737786
	2e5611854b2b6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   21 minutes ago       Exited              busybox                   0                   7fefb8ab9046a       busybox-fc5497c4f-7zbr8
	14ca8a91c3a85       cbb01a7bd410d                                                                                         24 minutes ago       Exited              coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              24 minutes ago       Exited              kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	4394527287d9e       747097150317f                                                                                         24 minutes ago       Exited              kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                                         25 minutes ago       Exited              kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                                         25 minutes ago       Exited              etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                                         25 minutes ago       Exited              kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                                         25 minutes ago       Exited              kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	[INFO] 10.244.0.3:48378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238684s
	[INFO] 10.244.0.3:59221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013090305s
	[INFO] 10.244.0.3:42881 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000740933s
	[INFO] 10.244.0.3:51488 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.022252255s
	[INFO] 10.244.0.3:57389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143058s
	[INFO] 10.244.0.3:48854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005255577s
	[INFO] 10.244.0.3:37749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129992s
	[INFO] 10.244.0.3:49159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143259s
	[INFO] 10.244.0.3:33267 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003880164s
	[INFO] 10.244.0.3:55644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123464s
	[INFO] 10.244.0.3:40518 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115443s
	[INFO] 10.244.0.3:44250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088045s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102385s
	[INFO] 10.244.0.3:58734 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104426s
	[INFO] 10.244.0.3:33373 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089833s
	[INFO] 10.244.0.3:46218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084391s
	[INFO] 10.244.0.3:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011407s
	[INFO] 10.244.0.3:41894 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140377s
	[INFO] 10.244.0.3:40760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132699s
	[INFO] 10.244.0.3:37622 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097943s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [513df62eec3d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59311 - 41845 "HINFO IN 6854891090202188984.7957026021720121455. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009982044s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[445986774]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[445986774]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:56:11.806)
	Trace[445986774]: [30.001125532s] [30.001125532s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1234663045]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[1234663045]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:56:11.806)
	Trace[1234663045]: [30.001264536s] [30.001264536s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[889784802]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[889784802]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:56:11.806)
	Trace[889784802]: [30.001227605s] [30.001227605s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
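
In this coredns instance every list against the Service VIP 10.96.0.1:443 hung for the full 30s client-go timeout ("i/o timeout" rather than "connection refused"), which suggests the packets were being dropped (e.g. kube-proxy rules not yet reprogrammed after the restart) rather than the apiserver being down. A small probe sketch to tell the two cases apart (assumption: run from inside a pod on the cluster network; VIP and port taken from the traces above):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// "connection refused" => something answered and rejected the SYN;
	// "i/o timeout" => packets silently dropped, e.g. no DNAT rule for
	// the Service VIP has been installed yet.
	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", conn.RemoteAddr())
}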
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:57:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fec5e25fede4a85b02ed21e485f5a15
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         24m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 24m                  kube-proxy       
	  Normal  Starting                 112s                 kube-proxy       
	  Normal  NodeAllocatableEnforced  24m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  24m                  kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m                  kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m                  kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  Starting                 24m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           24m                  node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x7 over 117s)  kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  117s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           101s                 node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000110] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +1.009162] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000007] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.004064] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000005] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +2.011784] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000023] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000004] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000001] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +4.063705] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000007] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +8.187381] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000006] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000015] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
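
The dmesg lines above are the kernel's martian-packet log: traffic sourced from the Service VIP 10.96.0.1 (sent by pod 10.244.0.3) arrived on the br-b174e10eedee docker bridge, where reverse-path validation does not expect it. Whether such packets are logged at all is governed by two per-interface sysctls; a sketch to read them (assumption: run on the host, using the standard procfs paths):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// log_martians toggles the "martian source" messages themselves;
	// rp_filter (0/1/2) controls how strictly reverse paths are validated.
	for _, key := range []string{"log_martians", "rp_filter"} {
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/" + key)
		if err != nil {
			fmt.Println(key, "read error:", err)
			continue
		}
		fmt.Printf("%s = %s\n", key, strings.TrimSpace(string(b)))
	}
}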
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	{"level":"info","ts":"2024-05-22T18:42:33.669298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-05-22T18:42:33.674226Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":669,"took":"4.650962ms","hash":2988179383,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-22T18:42:33.674261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2988179383,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:47:33.674441Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-05-22T18:47:33.676887Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":911,"took":"2.169071ms","hash":3399617496,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:47:33.676921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3399617496,"revision":911,"compact-revision":669}
	{"level":"info","ts":"2024-05-22T18:52:33.678754Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1153}
	{"level":"info","ts":"2024-05-22T18:52:33.681122Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1153,"took":"2.100554ms","hash":435437424,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:52:33.681165Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435437424,"revision":1153,"compact-revision":911}
	{"level":"info","ts":"2024-05-22T18:55:19.441272Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:55:19.441345Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-737786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:55:19.441469Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:55:19.441514Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:55:19.443085Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:55:19.443188Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:55:19.454136Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-05-22T18:55:19.456177Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:19.456334Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:19.456374Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-737786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> etcd [eefaf11c384e] <==
	{"level":"info","ts":"2024-05-22T18:55:37.977587Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:55:37.977682Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:55:37.977702Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:55:38.043619Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:55:38.043748Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:55:38.043762Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:55:38.048062Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:55:38.048115Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:38.048236Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:38.05031Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:55:38.050381Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:55:38.967217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.967297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.967325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.96734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.969829Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:55:38.969867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:55:38.969858Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:55:38.970074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:55:38.970142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:55:38.971872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-05-22T18:55:38.971922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:57:33 up  1:39,  0 users,  load average: 0.22, 0.33, 0.33
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [43dd6bc557dd] <==
	I0522 18:55:41.953622       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0522 18:55:42.445615       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:55:42.445657       1 main.go:227] handling current node
	I0522 18:55:52.459768       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:55:52.459791       1 main.go:227] handling current node
	I0522 18:56:02.471114       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:56:02.471137       1 main.go:227] handling current node
	I0522 18:56:12.474562       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:56:12.474592       1 main.go:227] handling current node
	I0522 18:56:22.485820       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:56:22.485842       1 main.go:227] handling current node
	I0522 18:56:32.489641       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:56:32.489663       1 main.go:227] handling current node
	I0522 18:56:42.501493       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:56:42.501517       1 main.go:227] handling current node
	I0522 18:56:52.505181       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:56:52.505203       1 main.go:227] handling current node
	I0522 18:57:02.517123       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:02.517149       1 main.go:227] handling current node
	I0522 18:57:12.520182       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:12.520205       1 main.go:227] handling current node
	I0522 18:57:22.532061       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:22.532090       1 main.go:227] handling current node
	I0522 18:57:32.535480       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:32.535503       1 main.go:227] handling current node
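
Both kindnet snapshots only ever report "handling current node" with the single IP 192.168.67.2: the m02 node never registered, consistent with the provisioning failure at the top of this log. A quick host-side check for the root cause is to render which network keys the m02 container actually carries, reusing the same inspect-template mechanism minikube used (a diagnostic sketch, not part of the test suite):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Print every key in .NetworkSettings.Networks for the m02 container.
	// If "multinode-737786-m02" is missing here, the {{with (index ...)}}
	// template in the provisioning loop above renders empty output.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		"{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}",
		"multinode-737786-m02").CombinedOutput()
	if err != nil {
		fmt.Println("inspect failed:", err, string(out))
		return
	}
	fmt.Println("attached networks:", string(out))
}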
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:53:16.849062       1 main.go:227] handling current node
	I0522 18:53:26.852270       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:26.852292       1 main.go:227] handling current node
	I0522 18:53:36.861628       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:36.861651       1 main.go:227] handling current node
	I0522 18:53:46.865179       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:46.865201       1 main.go:227] handling current node
	I0522 18:53:56.868146       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:56.868167       1 main.go:227] handling current node
	I0522 18:54:06.871251       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:06.871301       1 main.go:227] handling current node
	I0522 18:54:16.877176       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:16.877198       1 main.go:227] handling current node
	I0522 18:54:26.880323       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:26.880354       1 main.go:227] handling current node
	I0522 18:54:36.882866       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:36.882888       1 main.go:227] handling current node
	I0522 18:54:46.886203       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:46.886223       1 main.go:227] handling current node
	I0522 18:54:56.888938       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:56.888961       1 main.go:227] handling current node
	I0522 18:55:06.893856       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:55:06.893878       1 main.go:227] handling current node
	I0522 18:55:16.902298       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:55:16.902328       1 main.go:227] handling current node
	
	
	==> kube-apiserver [495d862fbc88] <==
	I0522 18:55:39.878184       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0522 18:55:39.879367       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0522 18:55:39.878229       1 naming_controller.go:291] Starting NamingConditionController
	I0522 18:55:39.878252       1 controller.go:139] Starting OpenAPI controller
	I0522 18:55:39.878272       1 controller.go:87] Starting OpenAPI V3 controller
	I0522 18:55:40.047494       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:55:40.047771       1 policy_source.go:224] refreshing policies
	I0522 18:55:40.048571       1 shared_informer.go:320] Caches are synced for configmaps
	I0522 18:55:40.051831       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:55:40.057053       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0522 18:55:40.057086       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 18:55:40.057103       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0522 18:55:40.057166       1 aggregator.go:165] initial CRD sync complete...
	I0522 18:55:40.057226       1 autoregister_controller.go:141] Starting autoregister controller
	I0522 18:55:40.057259       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0522 18:55:40.057286       1 cache.go:39] Caches are synced for autoregister controller
	I0522 18:55:40.057314       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0522 18:55:40.057291       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0522 18:55:40.058812       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:55:40.062976       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0522 18:55:40.073809       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0522 18:55:40.148728       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0522 18:55:40.880951       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:55:53.121182       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:55:53.171410       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [6991b35c6800] <==
	W0522 18:55:28.873894       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.877263       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.903981       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.935678       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.952488       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.962287       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.966797       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.983853       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.993570       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.022783       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.048399       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.069672       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.110404       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.124921       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.158271       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.170885       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.200867       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.291229       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.307325       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.329710       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.376916       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.387751       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.387856       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.407058       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.465132       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	I0522 18:36:27.123251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.253947ms"
	I0522 18:36:27.133722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.381144ms"
	I0522 18:36:27.133807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.98µs"
	I0522 18:36:27.133845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.606µs"
	I0522 18:36:30.202749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.775378ms"
	I0522 18:36:30.202822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.162µs"
	
	
	==> kube-controller-manager [f57ae1200385] <==
	I0522 18:55:52.857093       1 shared_informer.go:320] Caches are synced for daemon sets
	I0522 18:55:52.858302       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:55:52.861613       1 shared_informer.go:320] Caches are synced for disruption
	I0522 18:55:52.862744       1 shared_informer.go:320] Caches are synced for stateful set
	I0522 18:55:52.868272       1 shared_informer.go:320] Caches are synced for attach detach
	I0522 18:55:52.868302       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0522 18:55:52.868326       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0522 18:55:52.868368       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0522 18:55:52.869529       1 shared_informer.go:320] Caches are synced for expand
	I0522 18:55:52.876074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.118981ms"
	I0522 18:55:52.876376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.821µs"
	I0522 18:55:52.907328       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0522 18:55:52.918533       1 shared_informer.go:320] Caches are synced for crt configmap
	I0522 18:55:52.953194       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0522 18:55:52.966173       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:55:52.967979       1 shared_informer.go:320] Caches are synced for job
	I0522 18:55:52.972293       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:55:53.014552       1 shared_informer.go:320] Caches are synced for cronjob
	I0522 18:55:53.051331       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0522 18:55:53.055811       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 18:55:53.485614       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:55:53.518125       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:55:53.518148       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:56:15.529637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.104196ms"
	I0522 18:56:15.529730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.444µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9e66337e0a3b] <==
	I0522 18:55:41.578062       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:55:41.643800       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:55:41.666145       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:55:41.666189       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:55:41.668333       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:55:41.668357       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:55:41.668379       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:55:41.668660       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:55:41.668683       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:55:41.669565       1 config.go:192] "Starting service config controller"
	I0522 18:55:41.669588       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:55:41.669604       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:55:41.669612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:55:41.669709       1 config.go:319] "Starting node config controller"
	I0522 18:55:41.669715       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:55:41.770605       1 shared_informer.go:320] Caches are synced for node config
	I0522 18:55:41.770630       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:55:41.770656       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [94cf43c9c185] <==
	I0522 18:55:38.604832       1 serving.go:380] Generated self-signed cert in-memory
	W0522 18:55:39.946054       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0522 18:55:39.946095       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0522 18:55:39.946107       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0522 18:55:39.946116       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0522 18:55:39.960130       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0522 18:55:39.960159       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:55:39.962719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0522 18:55:39.962851       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0522 18:55:39.962872       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:55:39.962893       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0522 18:55:40.163188       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:55:19.470767       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0522 18:55:19.470986       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.852993    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75-xtables-lock\") pod \"kube-proxy-kqtgj\" (UID: \"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75\") " pod="kube-system/kube-proxy-kqtgj"
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.853021    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.853045    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e454b0cd-e618-4268-8882-69d2a4544917-lib-modules\") pod \"kindnet-qpfbl\" (UID: \"e454b0cd-e618-4268-8882-69d2a4544917\") " pod="kube-system/kindnet-qpfbl"
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.853133    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75-lib-modules\") pod \"kube-proxy-kqtgj\" (UID: \"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75\") " pod="kube-system/kube-proxy-kqtgj"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.263222    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7044f4a3341c31c26a26c9a54148b5edf783501f39de034de125ea0756da88"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.269254    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1d360112edd5f1fefe695c76c60c4bcb6ff37c4ff1d3557141f077bc1d13ec"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.556834    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b52bbcc47a83fe266e6f891da30d8acaee28a3ce90bbbfa7209a66a33a7fc4"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.565710    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="635f4e9d5f8f1c8d7e841846d31b2e5cf268c887e750af271ef32caeb22d24a1"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.577338    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b2b3d758240c7c593442266ca02c7d49dce426e0b92147a72b5a13d59d90d0"
	May 22 18:55:43 multinode-737786 kubelet[1392]: I0522 18:55:43.640639    1392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 22 18:55:45 multinode-737786 kubelet[1392]: I0522 18:55:45.511732    1392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 22 18:55:46 multinode-737786 kubelet[1392]: E0522 18:55:46.826592    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:55:46 multinode-737786 kubelet[1392]: E0522 18:55:46.826655    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:55:56 multinode-737786 kubelet[1392]: E0522 18:55:56.845259    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:55:56 multinode-737786 kubelet[1392]: E0522 18:55:56.845308    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:56:06 multinode-737786 kubelet[1392]: E0522 18:56:06.868571    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:56:06 multinode-737786 kubelet[1392]: E0522 18:56:06.868605    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:56:11 multinode-737786 kubelet[1392]: I0522 18:56:11.854232    1392 scope.go:117] "RemoveContainer" containerID="16cb7c11afec8ec9106f148ae63dd8087aa03a7f81026fff036097da39aab0cb"
	May 22 18:56:11 multinode-737786 kubelet[1392]: I0522 18:56:11.854547    1392 scope.go:117] "RemoveContainer" containerID="11bb4599579bf5a23ef05bb4313bbd0b1ad6e971d79409ac99180f1970fef76b"
	May 22 18:56:11 multinode-737786 kubelet[1392]: E0522 18:56:11.854826    1392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d953629-c86b-47be-84da-baa3bdf24d2e)\"" pod="kube-system/storage-provisioner" podUID="5d953629-c86b-47be-84da-baa3bdf24d2e"
	May 22 18:56:16 multinode-737786 kubelet[1392]: E0522 18:56:16.885020    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:56:16 multinode-737786 kubelet[1392]: E0522 18:56:16.885053    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:56:23 multinode-737786 kubelet[1392]: I0522 18:56:23.858356    1392 scope.go:117] "RemoveContainer" containerID="11bb4599579bf5a23ef05bb4313bbd0b1ad6e971d79409ac99180f1970fef76b"
	May 22 18:56:26 multinode-737786 kubelet[1392]: E0522 18:56:26.903258    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:56:26 multinode-737786 kubelet[1392]: E0522 18:56:26.903337    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	
	
	==> storage-provisioner [11bb4599579b] <==
	I0522 18:55:41.545271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0522 18:56:11.548634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [2775772a4970] <==
	I0522 18:56:23.937856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:56:23.945592       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:56:23.945661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:56:41.339602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:56:41.339666       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_bcc31734-cd22-4159-9a62-58c7b91d38ec became leader
	I0522 18:56:41.339738       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_bcc31734-cd22-4159-9a62-58c7b91d38ec!
	I0522 18:56:41.439960       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_bcc31734-cd22-4159-9a62-58c7b91d38ec!
	

                                                
                                                
-- /stdout --
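Note on the log dump above: the long run of kube-apiserver warnings ("dial tcp 127.0.0.1:2379: connect: connection refused") is the apiserver's etcd client retrying while etcd was down across the node stop/restart, so it reads as expected churn around the restart rather than the root cause of the failure. A quick triage sketch for separating this churn from the rest of the capture (the grep pattern is illustrative; the logs subcommand is the same one the post-mortem advice recommends):

	out/minikube-linux-amd64 -p multinode-737786 logs --file=logs.txt
	grep -c 'dial tcp 127.0.0.1:2379: connect: connection refused' logs.txt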
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartKeepsNodes]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  5m55s (x4 over 21m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  114s                 default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (137.53s)
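The describe output above pins down why RestartKeepsNodes fails: after the restart only one node is registered ("0/1 nodes are available"), and the busybox ReplicaSet carries a pod anti-affinity rule, so its second replica cannot co-locate with the first and stays Pending until m02/m03 rejoin. A minimal re-check sketch for once the workers are expected back, built from the same kubectl invocations the post-mortem helpers use:

	kubectl --context multinode-737786 get nodes
	kubectl --context multinode-737786 get po -A --field-selector=status.phase!=Running
	kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n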

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (107.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 node delete m03
E0522 18:58:47.894676   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
multinode_test.go:416: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 node delete m03: exit status 80 (1m45.950876101s)

                                                
                                                
-- stdout --
	* Deleting node m03 from cluster multinode-737786
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0522 18:59:20.518784  197133 node.go:177] kubectl delete node "multinode-737786-m03" failed: nodes "multinode-737786-m03" not found
	X Exiting due to GUEST_NODE_DELETE: deleting node: nodes "multinode-737786-m03" not found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_494011a6b05fec7d81170870a2aee2ef446d16a4_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:418: node delete returned an error. args "out/minikube-linux-amd64 -p multinode-737786 node delete m03": exit status 80
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr
multinode_test.go:422: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr: exit status 7 (312.479687ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-737786-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	
	multinode-737786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:59:20.566083  197963 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:59:20.566353  197963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:20.566363  197963 out.go:304] Setting ErrFile to fd 2...
	I0522 18:59:20.566369  197963 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:20.566536  197963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:59:20.566718  197963 out.go:298] Setting JSON to false
	I0522 18:59:20.566750  197963 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:59:20.566842  197963 notify.go:220] Checking for updates...
	I0522 18:59:20.567160  197963 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:20.567177  197963 status.go:255] checking status of multinode-737786 ...
	I0522 18:59:20.567699  197963 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:20.585438  197963 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:59:20.585482  197963 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:59:20.585747  197963 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:59:20.600786  197963 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:59:20.601001  197963 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:59:20.601032  197963 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:20.616773  197963 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:20.700014  197963 ssh_runner.go:195] Run: systemctl --version
	I0522 18:59:20.703614  197963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:59:20.713344  197963 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:59:20.761735  197963 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:59:20.752747779 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:59:20.762261  197963 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:59:20.762288  197963 api_server.go:166] Checking apiserver status ...
	I0522 18:59:20.762322  197963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:59:20.772639  197963 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1914/cgroup
	I0522 18:59:20.780560  197963 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/495d862fbc88911435c6149cc765ad18b25f4a16c9c87501027544b170987a9f"
	I0522 18:59:20.780617  197963 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/495d862fbc88911435c6149cc765ad18b25f4a16c9c87501027544b170987a9f/freezer.state
	I0522 18:59:20.787766  197963 api_server.go:204] freezer state: "THAWED"
	I0522 18:59:20.787793  197963 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:59:20.791232  197963 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:59:20.791253  197963 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:59:20.791263  197963 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:59:20.791317  197963 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:59:20.791541  197963 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:20.808194  197963 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:59:20.808212  197963 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:59:20.808481  197963 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:59:20.824216  197963 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:59:20.824248  197963 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:59:20.824264  197963 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:59:20.824273  197963 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:59:20.824513  197963 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:59:20.839296  197963 status.go:330] multinode-737786-m03 host status = "Stopped" (err=<nil>)
	I0522 18:59:20.839315  197963 status.go:343] host is not running, skipping remaining checks
	I0522 18:59:20.839321  197963 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:424: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr" : exit status 7
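The m02 "host: Error / kubelet: Nonexistent" entry traces to status.go:352 in the trace above: minikube reads the node container's addresses with a docker inspect Go template that prints "IPv4,IPv6" and then splits on the comma; a healthy container yields two values (the IPv6 half may be empty), while a container missing its "multinode-737786-m02" network attachment renders nothing, hence "should have 2 values, got 1 values". A sketch for confirming this by hand, reusing the same inspect template the trace shows:

	docker container inspect multinode-737786-m02 --format '{{.State.Status}}'
	docker container inspect -f '{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' multinode-737786-m02

An empty result from the second command reproduces the one-value error; an address with a trailing comma (empty IPv6 half) is the healthy shape.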
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 191553,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:55:30.428700973Z",
	            "FinishedAt": "2024-05-22T18:55:29.739597027Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e5d9c4f018f85e131e1e3e35160c3be5874cc3e9e983a114ff800193704e1cf",
	            "SandboxKey": "/var/run/docker/netns/5e5d9c4f018f",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32927"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32926"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32923"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32925"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32924"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "db5b9713a729684619c46904638292c75dda74a2b3239964bd21c539163cbff6",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
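The inspect output above is where the harness gets its SSH endpoint: 22/tcp inside the container is published on 127.0.0.1:32927. A minimal Go sketch of the same lookup (container name taken from this log; the inspect template mirrors the one cli_runner invokes later in the log):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // hostSSHPort asks Docker for the host port published for 22/tcp.
    func hostSSHPort(container string) (string, error) {
    	out, err := exec.Command("docker", "container", "inspect", "-f",
    		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
    		container).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	port, err := hostSSHPort("multinode-737786")
    	if err != nil {
    		fmt.Println("inspect failed:", err)
    		return
    	}
    	fmt.Println(port) // 32927, per the Ports block above
    }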
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/DeleteNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/DeleteNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/DeleteNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m02_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m03 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp testdata/cp-test.txt                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m03_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02:/home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m02 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-737786 node stop m03                                                          | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	| node    | multinode-737786 node start                                                             | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-737786                                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC |                     |
	| stop    | -p multinode-737786                                                                     | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC | 22 May 24 18:55 UTC |
	| start   | -p multinode-737786                                                                     | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-737786                                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:57 UTC |                     |
	| node    | multinode-737786 node delete                                                            | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:57 UTC |                     |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:55:30
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
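The "Log line format" header describes klog-style prefixes. A small Go sketch that splits such a line into its fields (illustrative; the regexp is an assumption of mine, not minikube's own parser):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches the documented prefix: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var logLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

    func main() {
    	m := logLine.FindStringSubmatch("I0522 18:55:30.016582  191271 out.go:291] Setting OutFile to fd 1 ...")
    	if m != nil {
    		fmt.Printf("level=%s date=%s time=%s thread=%s src=%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6])
    	}
    }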
	I0522 18:55:30.016582  191271 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:55:30.016705  191271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:30.016722  191271 out.go:304] Setting ErrFile to fd 2...
	I0522 18:55:30.016730  191271 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:55:30.016907  191271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:55:30.017442  191271 out.go:298] Setting JSON to false
	I0522 18:55:30.018352  191271 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":5874,"bootTime":1716398256,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:55:30.018407  191271 start.go:139] virtualization: kvm guest
	I0522 18:55:30.020609  191271 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:55:30.022032  191271 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:55:30.023205  191271 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:55:30.022039  191271 notify.go:220] Checking for updates...
	I0522 18:55:30.024646  191271 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:30.025941  191271 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:55:30.027248  191271 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:55:30.028476  191271 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:55:30.030067  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:30.030140  191271 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:55:30.051240  191271 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:55:30.051370  191271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:55:30.102381  191271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:55:30.093628495 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:55:30.102490  191271 docker.go:295] overlay module found
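minikube probes the daemon here with docker system info --format "{{json .}}". A hedged Go sketch decoding just the fields the driver check consumes (field names are standard docker info JSON keys):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // dockerInfo picks out a few fields from `docker system info --format "{{json .}}"`.
    type dockerInfo struct {
    	ServerVersion   string `json:"ServerVersion"`
    	CgroupDriver    string `json:"CgroupDriver"`
    	OperatingSystem string `json:"OperatingSystem"`
    	NCPU            int    `json:"NCPU"`
    	MemTotal        int64  `json:"MemTotal"`
    }

    func main() {
    	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
    	if err != nil {
    		panic(err)
    	}
    	var info dockerInfo
    	if err := json.Unmarshal(out, &info); err != nil {
    		panic(err)
    	}
    	fmt.Printf("%+v\n", info) // e.g. ServerVersion:26.1.3 CgroupDriver:cgroupfs
    }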
	I0522 18:55:30.104504  191271 out.go:177] * Using the docker driver based on existing profile
	I0522 18:55:30.105610  191271 start.go:297] selected driver: docker
	I0522 18:55:30.105625  191271 start.go:901] validating driver "docker" against &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:30.105706  191271 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:55:30.105775  191271 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:55:30.148150  191271 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:32 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:55:30.139765007 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:55:30.149022  191271 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:55:30.149059  191271 cni.go:84] Creating CNI manager for ""
	I0522 18:55:30.149071  191271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:55:30.149133  191271 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:30.151019  191271 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:55:30.152138  191271 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:55:30.153345  191271 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:55:30.154404  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:30.154431  191271 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:55:30.154440  191271 cache.go:56] Caching tarball of preloaded images
	I0522 18:55:30.154497  191271 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:55:30.154509  191271 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:55:30.154516  191271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:55:30.154599  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:30.169685  191271 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:55:30.169705  191271 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:55:30.169727  191271 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:55:30.169758  191271 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:30.169840  191271 start.go:364] duration metric: took 44.168µs to acquireMachinesLock for "multinode-737786"
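acquireMachinesLock is a cross-process lock parameterized by the Delay:500ms/Timeout:10m0s values shown above. A minimal sketch of the same acquire-with-timeout idea using a lockfile (illustrative only; minikube itself uses a proper cross-process mutex):

    package main

    import (
    	"errors"
    	"fmt"
    	"os"
    	"time"
    )

    // acquire polls for an exclusive lockfile until timeout, sleeping delay between tries.
    func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    		if err == nil {
    			f.Close()
    			return func() { os.Remove(path) }, nil
    		}
    		if time.Now().After(deadline) {
    			return nil, errors.New("timed out acquiring " + path)
    		}
    		time.Sleep(delay)
    	}
    }

    func main() {
    	release, err := acquire("/tmp/minikube-machines.lock", 500*time.Millisecond, 10*time.Minute)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	defer release()
    	fmt.Println("lock held")
    }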
	I0522 18:55:30.169862  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:30.169876  191271 fix.go:54] fixHost starting: 
	I0522 18:55:30.170113  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:30.186497  191271 fix.go:112] recreateIfNeeded on multinode-737786: state=Stopped err=<nil>
	W0522 18:55:30.186530  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:30.188329  191271 out.go:177] * Restarting existing docker container for "multinode-737786" ...
	I0522 18:55:30.189575  191271 cli_runner.go:164] Run: docker start multinode-737786
	I0522 18:55:30.434280  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:30.450599  191271 kic.go:430] container "multinode-737786" state is running.
	I0522 18:55:30.450960  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:30.469222  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:30.469408  191271 machine.go:94] provisionDockerMachine start ...
	I0522 18:55:30.469451  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:30.486145  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:30.486342  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:30.486358  191271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:55:30.486939  191271 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33870->127.0.0.1:32927: read: connection reset by peer
	I0522 18:55:33.598615  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:55:33.598642  191271 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:55:33.598705  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.616028  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:33.616267  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:33.616289  191271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:55:33.737498  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:55:33.737589  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.753768  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:33.753939  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:33.753956  191271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:55:33.862867  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
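Everything in provisionDockerMachine is plain SSH against the forwarded port; the first dial above even hit a transient connection reset before the container's sshd was up. A sketch reproducing the first probe (run hostname over 127.0.0.1:32927 with the profile's id_rsa; key path, port, and user are from this log, and golang.org/x/crypto/ssh is assumed as the client library):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	// Host key checking is skipped: this is a throwaway local container.
    	client, err := ssh.Dial("tcp", "127.0.0.1:32927", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()
    	out, err := session.Output("hostname")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%s", out) // multinode-737786
    }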
	I0522 18:55:33.862895  191271 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:55:33.862922  191271 ubuntu.go:177] setting up certificates
	I0522 18:55:33.862933  191271 provision.go:84] configureAuth start
	I0522 18:55:33.862986  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:33.879102  191271 provision.go:143] copyHostCerts
	I0522 18:55:33.879142  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:55:33.879166  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:55:33.879178  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:55:33.879240  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:55:33.879346  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:55:33.879366  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:55:33.879370  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:55:33.879398  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:55:33.879456  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:55:33.879472  191271 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:55:33.879476  191271 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:55:33.879500  191271 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:55:33.879560  191271 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
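The server cert generated here is a CA-signed certificate carrying the SANs in the san=[...] list. A compressed Go sketch of that issuance (assumes an RSA CA key in PKCS#1 PEM, as minikube's certs/ layout suggests; error handling trimmed for brevity):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, _ := os.ReadFile("ca.pem")
    	caKeyPEM, _ := os.ReadFile("ca-key.pem")
    	caBlock, _ := pem.Decode(caPEM)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
    	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

    	serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-737786"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// The SAN set logged above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.67.2")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-737786"},
    	}
    	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }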
	I0522 18:55:33.981006  191271 provision.go:177] copyRemoteCerts
	I0522 18:55:33.981066  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:55:33.981098  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:33.997545  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.083209  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:55:34.083291  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:55:34.103441  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:55:34.103506  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:55:34.123440  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:55:34.123484  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:55:34.142960  191271 provision.go:87] duration metric: took 280.016987ms to configureAuth
	I0522 18:55:34.142986  191271 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:55:34.143149  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:34.143191  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.159108  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.159288  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.159303  191271 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:55:34.271284  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:55:34.271307  191271 ubuntu.go:71] root file system type: overlay
	I0522 18:55:34.271413  191271 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:55:34.271478  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.287895  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.288060  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.288123  191271 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:55:34.412978  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:55:34.413065  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.429426  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:34.429609  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32927 <nil> <nil>}
	I0522 18:55:34.429634  191271 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:55:34.543660  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:55:34.543688  191271 machine.go:97] duration metric: took 4.074267152s to provisionDockerMachine
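The diff ... || { mv ...; systemctl ... } one-liner above is a write-if-changed guard: Docker is only restarted when the rendered unit actually differs from what is installed. The same pattern in Go (sketch; assumes root and a systemd host):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // updateUnit swaps in a new unit file and restarts Docker only when the content changed.
    func updateUnit(path string, rendered []byte) error {
    	current, err := os.ReadFile(path)
    	if err == nil && bytes.Equal(current, rendered) {
    		return nil // identical: avoid a needless daemon restart
    	}
    	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"systemctl", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			return fmt.Errorf("%v: %v: %s", args, err, out)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\n...") // rendered elsewhere, as in the printf step above
    	if err := updateUnit("/lib/systemd/system/docker.service", unit); err != nil {
    		fmt.Println(err)
    	}
    }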
	I0522 18:55:34.543701  191271 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:55:34.543714  191271 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:55:34.543786  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:55:34.543829  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.560130  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.642945  191271 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:55:34.645547  191271 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:55:34.645562  191271 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:55:34.645568  191271 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:55:34.645579  191271 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:55:34.645586  191271 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:55:34.645590  191271 command_runner.go:130] > ID=ubuntu
	I0522 18:55:34.645594  191271 command_runner.go:130] > ID_LIKE=debian
	I0522 18:55:34.645599  191271 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:55:34.645603  191271 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:55:34.645609  191271 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:55:34.645615  191271 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:55:34.645619  191271 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:55:34.645674  191271 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:55:34.645696  191271 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:55:34.645706  191271 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:55:34.645714  191271 info.go:137] Remote host: Ubuntu 22.04.4 LTS
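The "Couldn't set key ..." warnings above are benign: /etc/os-release is key=value text, and the provisioner maps it onto a struct, skipping keys (VERSION_CODENAME, PRIVACY_POLICY_URL, UBUNTU_CODENAME) it has no field for. A map-based parse for comparison:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    func main() {
    	f, err := os.Open("/etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	defer f.Close()
    	release := map[string]string{}
    	scanner := bufio.NewScanner(f)
    	for scanner.Scan() {
    		line := scanner.Text()
    		k, v, ok := strings.Cut(line, "=")
    		if !ok || strings.HasPrefix(line, "#") {
    			continue // skip blanks and comments
    		}
    		release[k] = strings.Trim(v, `"`)
    	}
    	fmt.Println(release["PRETTY_NAME"]) // Ubuntu 22.04.4 LTS
    }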
	I0522 18:55:34.645725  191271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:55:34.645767  191271 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:55:34.645841  191271 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:55:34.645853  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:55:34.645929  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:55:34.653086  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:55:34.672745  191271 start.go:296] duration metric: took 129.030542ms for postStartSetup
	I0522 18:55:34.672809  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:55:34.672852  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.688507  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.767346  191271 command_runner.go:130] > 27%!
	(MISSING)I0522 18:55:34.767631  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:55:34.771441  191271 command_runner.go:130] > 213G
	I0522 18:55:34.771575  191271 fix.go:56] duration metric: took 4.601701145s for fixHost
	I0522 18:55:34.771595  191271 start.go:83] releasing machines lock for "multinode-737786", held for 4.601740929s
	I0522 18:55:34.771653  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:55:34.787192  191271 ssh_runner.go:195] Run: cat /version.json
	I0522 18:55:34.787232  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.787317  191271 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:55:34.787371  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:34.803468  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.803975  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:34.962314  191271 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:55:34.964188  191271 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:55:34.964307  191271 ssh_runner.go:195] Run: systemctl --version
	I0522 18:55:34.968188  191271 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:55:34.968212  191271 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:55:34.968386  191271 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:55:34.972176  191271 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:55:34.972197  191271 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:55:34.972207  191271 command_runner.go:130] > Device: 37h/55d	Inode: 1306969     Links: 1
	I0522 18:55:34.972215  191271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:55:34.972234  191271 command_runner.go:130] > Access: 2024-05-22 18:32:26.662663204 +0000
	I0522 18:55:34.972243  191271 command_runner.go:130] > Modify: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972251  191271 command_runner.go:130] > Change: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972259  191271 command_runner.go:130] >  Birth: 2024-05-22 18:32:26.638661469 +0000
	I0522 18:55:34.972314  191271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:55:34.987621  191271 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:55:34.987680  191271 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:55:34.994995  191271 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:55:34.995017  191271 start.go:494] detecting cgroup driver to use...
	I0522 18:55:34.995044  191271 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:55:34.995149  191271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:55:35.008015  191271 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:55:35.008981  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:55:35.017393  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:55:35.027698  191271 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:55:35.027743  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:55:35.036084  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:55:35.044052  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:55:35.052258  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:55:35.060384  191271 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:55:35.067811  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:55:35.075774  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:55:35.083880  191271 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:55:35.091876  191271 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:55:35.098619  191271 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:55:35.098662  191271 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:55:35.105547  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:35.177710  191271 ssh_runner.go:195] Run: sudo systemctl restart containerd
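The sed edits above target stanzas like the following in /etc/containerd/config.toml; this fragment shows the post-edit values, reconstructed from the sed expressions rather than captured verbatim in the log:

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      sandbox_image = "registry.k8s.io/pause:3.9"
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
        runtime_type = "io.containerd.runc.v2"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
          SystemdCgroup = false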
	I0522 18:55:35.250942  191271 start.go:494] detecting cgroup driver to use...
	I0522 18:55:35.251038  191271 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:55:35.251122  191271 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:55:35.261334  191271 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:55:35.261354  191271 command_runner.go:130] > [Unit]
	I0522 18:55:35.261362  191271 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:55:35.261370  191271 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:55:35.261375  191271 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:55:35.261384  191271 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:55:35.261391  191271 command_runner.go:130] > Wants=network-online.target
	I0522 18:55:35.261415  191271 command_runner.go:130] > Requires=docker.socket
	I0522 18:55:35.261432  191271 command_runner.go:130] > StartLimitBurst=3
	I0522 18:55:35.261443  191271 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:55:35.261451  191271 command_runner.go:130] > [Service]
	I0522 18:55:35.261457  191271 command_runner.go:130] > Type=notify
	I0522 18:55:35.261468  191271 command_runner.go:130] > Restart=on-failure
	I0522 18:55:35.261483  191271 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:55:35.261500  191271 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:55:35.261516  191271 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:55:35.261524  191271 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:55:35.261534  191271 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:55:35.261547  191271 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:55:35.261557  191271 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:55:35.261576  191271 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:55:35.261588  191271 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:55:35.261594  191271 command_runner.go:130] > ExecStart=
	I0522 18:55:35.261621  191271 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:55:35.261631  191271 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:55:35.261646  191271 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:55:35.261659  191271 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:55:35.261669  191271 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:55:35.261675  191271 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:55:35.261684  191271 command_runner.go:130] > LimitCORE=infinity
	I0522 18:55:35.261693  191271 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:55:35.261703  191271 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:55:35.261710  191271 command_runner.go:130] > TasksMax=infinity
	I0522 18:55:35.261720  191271 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:55:35.261728  191271 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:55:35.261736  191271 command_runner.go:130] > Delegate=yes
	I0522 18:55:35.261744  191271 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:55:35.261754  191271 command_runner.go:130] > KillMode=process
	I0522 18:55:35.261765  191271 command_runner.go:130] > [Install]
	I0522 18:55:35.261772  191271 command_runner.go:130] > WantedBy=multi-user.target
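
Annotation: as the comments in the unit dump explain, a drop-in must first blank ExecStart= before setting its own, because systemd otherwise treats the base unit's command and the drop-in's command as a sequence, which is only valid for Type=oneshot. A minimal drop-in showing the pattern (path and flag are placeholders, not the file above):

    # /etc/systemd/system/docker.service.d/override.conf  (hypothetical path)
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd --example-flag
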
	I0522 18:55:35.262253  191271 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:55:35.262328  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:55:35.272378  191271 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:55:35.286942  191271 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:55:35.287999  191271 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:55:35.290999  191271 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:55:35.291145  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:55:35.298279  191271 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:55:35.315216  191271 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:55:35.446839  191271 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:55:35.548330  191271 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:55:35.548469  191271 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:55:35.564761  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:35.639152  191271 ssh_runner.go:195] Run: sudo systemctl restart docker
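
Annotation: docker.go writes a 130-byte /etc/docker/daemon.json and restarts Docker to pin the "cgroupfs" driver detected earlier. The log does not print the file, but the conventional dockerd setting for the cgroup driver is an exec-opts entry, so the content is presumably of this shape (an assumption, not a quote from the log):

    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
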
	I0522 18:55:35.897209  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:55:35.908119  191271 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:55:35.918345  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:55:35.927683  191271 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:55:36.004999  191271 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:55:36.078400  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.150568  191271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:55:36.162061  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:55:36.171038  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.243030  191271 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:55:36.303786  191271 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:55:36.303856  191271 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:55:36.307863  191271 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:55:36.307890  191271 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:55:36.307896  191271 command_runner.go:130] > Device: 41h/65d	Inode: 218         Links: 1
	I0522 18:55:36.307903  191271 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:55:36.307908  191271 command_runner.go:130] > Access: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307913  191271 command_runner.go:130] > Modify: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307919  191271 command_runner.go:130] > Change: 2024-05-22 18:55:36.251127457 +0000
	I0522 18:55:36.307922  191271 command_runner.go:130] >  Birth: -
	I0522 18:55:36.307945  191271 start.go:562] Will wait 60s for crictl version
	I0522 18:55:36.307977  191271 ssh_runner.go:195] Run: which crictl
	I0522 18:55:36.310791  191271 command_runner.go:130] > /usr/bin/crictl
	I0522 18:55:36.310921  191271 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:55:36.342474  191271 command_runner.go:130] > Version:  0.1.0
	I0522 18:55:36.342498  191271 command_runner.go:130] > RuntimeName:  docker
	I0522 18:55:36.342505  191271 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:55:36.342511  191271 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:55:36.342526  191271 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
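
Annotation: the "Will wait 60s for socket path /var/run/cri-dockerd.sock" step is a poll-until-stat-succeeds loop, confirmed by the stat(1) output above. A minimal Go sketch of that pattern; the path and 60s timeout mirror the log, while the 500ms poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until path exists and is a unix socket.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // poll interval is assumed; the log doesn't show it
        }
        return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }
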
	I0522 18:55:36.342561  191271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:55:36.363987  191271 command_runner.go:130] > 26.1.2
	I0522 18:55:36.365226  191271 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:55:36.387207  191271 command_runner.go:130] > 26.1.2
	I0522 18:55:36.389505  191271 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:55:36.389579  191271 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:55:36.405602  191271 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:55:36.408842  191271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:55:36.418521  191271 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:55:36.418633  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:36.418681  191271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:55:36.434338  191271 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:55:36.434356  191271 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:55:36.434360  191271 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:55:36.434365  191271 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:55:36.434370  191271 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:55:36.434376  191271 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:55:36.434385  191271 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:55:36.434392  191271 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:55:36.434401  191271 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:36.434411  191271 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:55:36.435375  191271 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:55:36.435391  191271 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:55:36.435443  191271 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:55:36.451482  191271 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:55:36.451502  191271 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:55:36.451508  191271 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:55:36.451513  191271 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:55:36.451518  191271 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:55:36.451523  191271 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:55:36.451536  191271 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:55:36.451540  191271 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:55:36.451545  191271 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:36.451553  191271 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:55:36.452593  191271 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:55:36.452609  191271 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:55:36.452620  191271 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:55:36.452743  191271 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:55:36.452799  191271 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:55:36.491841  191271 command_runner.go:130] > cgroupfs
	I0522 18:55:36.493137  191271 cni.go:84] Creating CNI manager for ""
	I0522 18:55:36.493150  191271 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:55:36.493167  191271 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:55:36.493191  191271 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:55:36.493314  191271 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:55:36.493364  191271 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:55:36.500368  191271 command_runner.go:130] > kubeadm
	I0522 18:55:36.500385  191271 command_runner.go:130] > kubectl
	I0522 18:55:36.500390  191271 command_runner.go:130] > kubelet
	I0522 18:55:36.501014  191271 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:55:36.501074  191271 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:55:36.508385  191271 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:55:36.523332  191271 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:55:36.537874  191271 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0522 18:55:36.552595  191271 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:55:36.555448  191271 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
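
Annotation: both /etc/hosts edits follow the same shape: strip any prior entry for the name with grep -v, append the fresh mapping, and copy the temp file back into place with sudo. After this step the guest's /etc/hosts contains lines equivalent to:

    192.168.67.1	host.minikube.internal
    192.168.67.2	control-plane.minikube.internal
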
	I0522 18:55:36.564451  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:36.642902  191271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:55:36.654630  191271 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:55:36.654650  191271 certs.go:194] generating shared ca certs ...
	I0522 18:55:36.654663  191271 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:36.654795  191271 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:55:36.654860  191271 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:55:36.654873  191271 certs.go:256] generating profile certs ...
	I0522 18:55:36.654970  191271 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:55:36.655041  191271 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:55:36.655092  191271 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:55:36.655106  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:55:36.655127  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:55:36.655145  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:55:36.655158  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:55:36.655171  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:55:36.655182  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:55:36.655196  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:55:36.655210  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:55:36.655259  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:55:36.655305  191271 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:55:36.655318  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:55:36.655347  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:55:36.655380  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:55:36.655406  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:55:36.655457  191271 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:55:36.655490  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:55:36.655509  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:55:36.655527  191271 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:36.656072  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:55:36.677388  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:55:36.698564  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:55:36.746137  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:55:36.774335  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:55:36.844940  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:55:36.867576  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:55:36.892332  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:55:36.915359  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:55:36.935989  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:55:36.956836  191271 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:55:36.978204  191271 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:55:36.992907  191271 ssh_runner.go:195] Run: openssl version
	I0522 18:55:36.997686  191271 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:55:36.997748  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:55:37.005519  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008401  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008425  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.008462  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:55:37.014161  191271 command_runner.go:130] > 51391683
	I0522 18:55:37.014217  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
	I0522 18:55:37.021650  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:55:37.029393  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032351  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032375  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.032410  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:55:37.037998  191271 command_runner.go:130] > 3ec20f2e
	I0522 18:55:37.038254  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:55:37.045680  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:55:37.053800  191271 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056711  191271 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056742  191271 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.056791  191271 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:55:37.062340  191271 command_runner.go:130] > b5213941
	I0522 18:55:37.062547  191271 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:55:37.069967  191271 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:55:37.072857  191271 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:55:37.072876  191271 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0522 18:55:37.072882  191271 command_runner.go:130] > Device: 801h/2049d	Inode: 1307017     Links: 1
	I0522 18:55:37.072888  191271 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:55:37.072894  191271 command_runner.go:130] > Access: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072899  191271 command_runner.go:130] > Modify: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072903  191271 command_runner.go:130] > Change: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072911  191271 command_runner.go:130] >  Birth: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:55:37.072945  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:55:37.078522  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.078755  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:55:37.084341  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.084578  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:55:37.090035  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.090259  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:55:37.095704  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.095756  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:55:37.101044  191271 command_runner.go:130] > Certificate will not expire
	I0522 18:55:37.101094  191271 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0522 18:55:37.106347  191271 command_runner.go:130] > Certificate will not expire
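
Annotation: "openssl x509 -checkend 86400" asks whether a certificate is still valid 86400 seconds (24 hours) from now. The same check in Go for one of the certs probed above (path from the log; a sketch, not minikube's code):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block in file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // -checkend 86400 succeeds when the cert is still valid 24h from now.
        if time.Until(cert.NotAfter) > 24*time.Hour {
            fmt.Println("Certificate will not expire")
        } else {
            fmt.Println("Certificate will expire")
        }
    }
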
	I0522 18:55:37.106403  191271 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:55:37.106497  191271 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:55:37.124843  191271 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:55:37.132393  191271 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0522 18:55:37.132411  191271 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0522 18:55:37.132419  191271 command_runner.go:130] > /var/lib/minikube/etcd:
	I0522 18:55:37.132424  191271 command_runner.go:130] > member
	W0522 18:55:37.132447  191271 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:55:37.132459  191271 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:55:37.132465  191271 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:55:37.132505  191271 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:55:37.139565  191271 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:55:37.139949  191271 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-737786" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.140068  191271 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-737786" cluster setting kubeconfig missing "multinode-737786" context setting]
	I0522 18:55:37.140319  191271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
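
Annotation: kubeconfig.go decides the file "needs updating (will repair)" because neither a "multinode-737786" cluster nor context entry exists. A sketch of that repair using k8s.io/client-go/tools/clientcmd (the function shape and error handling are illustrative, not minikube's implementation):

    package main

    import (
        "log"

        "k8s.io/client-go/tools/clientcmd"
        clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext adds a cluster+context entry when it is missing,
    // then writes the kubeconfig back, mirroring the repair above.
    func ensureContext(path, name, server string) error {
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            return err
        }
        if _, ok := cfg.Contexts[name]; ok {
            return nil // already present: nothing to repair
        }
        cluster := clientcmdapi.NewCluster()
        cluster.Server = server
        cfg.Clusters[name] = cluster
        ctx := clientcmdapi.NewContext()
        ctx.Cluster = name
        cfg.Contexts[name] = ctx
        return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
        if err := ensureContext("/home/jenkins/minikube-integration/18943-9771/kubeconfig",
            "multinode-737786", "https://192.168.67.2:8443"); err != nil {
            log.Fatal(err)
        }
    }
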
	I0522 18:55:37.140688  191271 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.140913  191271 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:55:37.141318  191271 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:55:37.141459  191271 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:55:37.148863  191271 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.67.2
	I0522 18:55:37.148895  191271 kubeadm.go:591] duration metric: took 16.425758ms to restartPrimaryControlPlane
	I0522 18:55:37.148904  191271 kubeadm.go:393] duration metric: took 42.505287ms to StartCluster
	I0522 18:55:37.148931  191271 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:37.148985  191271 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.149459  191271 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:55:37.149654  191271 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:55:37.152713  191271 out.go:177] * Verifying Kubernetes components...
	I0522 18:55:37.149721  191271 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:55:37.149877  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:37.153954  191271 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:55:37.153961  191271 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:55:37.153992  191271 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:55:37.153957  191271 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:55:37.154051  191271 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	W0522 18:55:37.154065  191271 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:55:37.154096  191271 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:37.154247  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.154486  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.171776  191271 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:55:37.172020  191271 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:55:37.173669  191271 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:55:37.172259  191271 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	W0522 18:55:37.173707  191271 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:55:37.173740  191271 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:55:37.174905  191271 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.174926  191271 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:55:37.174967  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:37.174090  191271 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:55:37.190845  191271 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.190870  191271 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:55:37.190937  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:55:37.197226  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:37.210979  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32927 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:55:37.239056  191271 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:55:37.249298  191271 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:55:37.249409  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:37.249419  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:37.249426  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:37.249430  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:37.249651  191271 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:55:37.249672  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:37.292541  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.309037  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.371074  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.371130  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.371176  191271 retry.go:31] will retry after 264.181237ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.460775  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.460825  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.460847  191271 retry.go:31] will retry after 133.777268ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.595213  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:37.635676  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:37.749887  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:37.749959  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:37.749982  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:37.749999  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:37.750293  191271 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:55:37.750342  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:37.844082  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.844160  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.844205  191271 retry.go:31] will retry after 478.031663ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.853584  191271 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:55:37.857211  191271 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:55:37.857246  191271 retry.go:31] will retry after 515.22721ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
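
Annotation: the "will retry after 264.181237ms" / "will retry after 515.22721ms" lines come from a backoff-and-retry loop around kubectl apply while the apiserver is still refusing connections. A generic Go sketch of that pattern (not minikube's retry.go; the jitter range is an assumption):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs f up to attempts times, sleeping a jittered few hundred
    // milliseconds between tries, like the "will retry after ..." lines above.
    func retry(attempts int, f func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            d := time.Duration(100+rand.Intn(500)) * time.Millisecond // jitter range is an assumption
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        calls := 0
        err := retry(5, func() error {
            calls++
            if calls < 3 {
                return errors.New("connect: connection refused") // simulated apiserver not up yet
            }
            return nil
        })
        fmt.Println("result:", err, "after", calls, "calls")
    }
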
	I0522 18:55:38.249559  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:38.249587  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:38.249598  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:38.249602  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:38.323157  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:55:38.373635  191271 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:55:39.946432  191271 round_trippers.go:574] Response Status: 200 OK in 1696 milliseconds
	I0522 18:55:39.946464  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:39.946474  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:39.946478  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:39.946482  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:39.946485  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:39 GMT
	I0522 18:55:39.946489  191271 round_trippers.go:580]     Audit-Id: 25c542b6-5d69-4e1f-b457-019f46d0b3c3
	I0522 18:55:39.946493  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:39.947402  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:39.948394  191271 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:55:39.948474  191271 node_ready.go:38] duration metric: took 2.699146059s for node "multinode-737786" to be "Ready" ...
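
Annotation: the "waiting up to 6m0s for node ... to be Ready" loop above is a GET on the node followed by a check of its Ready condition. A client-go sketch of the same poll (kubeconfig path, node name, and timeout mirror the log; the poll interval is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-9771/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(6 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-737786", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println(`node has status "Ready":"True"`)
                        return
                    }
                }
            }
            time.Sleep(500 * time.Millisecond) // poll interval is assumed
        }
        fmt.Println("timed out waiting for node to be Ready")
    }
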
	I0522 18:55:39.948494  191271 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:55:39.948584  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:39.948597  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:39.948606  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:39.948613  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:39.963427  191271 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0522 18:55:39.963451  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:39.963460  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:39.963465  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:39.963470  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:39 GMT
	I0522 18:55:39.963473  191271 round_trippers.go:580]     Audit-Id: 29cd26ef-7452-4010-9449-59e360709035
	I0522 18:55:39.963477  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:39.963481  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.048655  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1526"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 57633 chars]
	I0522 18:55:40.053660  191271 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.053824  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:55:40.053850  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.053870  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.053883  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.059346  191271 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0522 18:55:40.059421  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.059441  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.059454  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.059469  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:55:40.059497  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:55:40.059516  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.059527  191271 round_trippers.go:580]     Audit-Id: 93431c2f-ec1f-4fd8-800a-aecc0626a610
	I0522 18:55:40.059734  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"427","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0522 18:55:40.060346  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.060403  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.060422  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.060435  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.146472  191271 round_trippers.go:574] Response Status: 200 OK in 86 milliseconds
	I0522 18:55:40.146560  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.146586  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.146596  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.146600  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.146604  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.146608  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.146631  191271 round_trippers.go:580]     Audit-Id: c46997f5-8ce7-490a-b32e-d6ef84d46be8
	I0522 18:55:40.146761  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.147186  191271 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.147238  191271 pod_ready.go:81] duration metric: took 93.497189ms for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
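The coredns wait above boils down to listing pods by the k8s-app=kube-dns label and testing each pod's PodReady condition. A hedged client-go sketch of the same check (placeholder kubeconfig path):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True,
// which is what pod_ready.go is waiting for above.
func podReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s Ready=%v\n", p.Name, podReady(p))
	}
}
```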
	I0522 18:55:40.147262  191271 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.147415  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:55:40.147441  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.147460  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.147477  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.150508  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:40.150572  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.150588  191271 round_trippers.go:580]     Audit-Id: f365bd66-16d5-494e-bd03-4c158b4f19e1
	I0522 18:55:40.150601  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.150627  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.150631  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.150634  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.150638  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.150819  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"289","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-cl
ient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config. [truncated 6148 chars]
	I0522 18:55:40.151364  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.151381  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.151391  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.151399  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.152784  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.152801  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.152811  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.152818  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.152831  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.152835  191271 round_trippers.go:580]     Audit-Id: d3e15c5e-fb13-4bdb-9f2b-e5251d5bd358
	I0522 18:55:40.152845  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.152850  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.152966  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.153383  191271 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.153406  191271 pod_ready.go:81] duration metric: took 6.080227ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.153421  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.153519  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:55:40.153530  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.153540  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.153545  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.155159  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.155179  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.155188  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.155205  191271 round_trippers.go:580]     Audit-Id: bf655a4b-43df-4c1b-8ffa-6e7ba1c46ee2
	I0522 18:55:40.155210  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.155215  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.155228  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.155231  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.155475  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"286","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kube
rnetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes.i [truncated 8532 chars]
	I0522 18:55:40.156172  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.156186  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.156195  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.156200  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.157607  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.157621  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.157629  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.157634  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.157638  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.157643  191271 round_trippers.go:580]     Audit-Id: a4d9b4c1-ce60-4797-85b7-8f19f338b51d
	I0522 18:55:40.157646  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.157650  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.158173  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1369","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5178 chars]
	I0522 18:55:40.158539  191271 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.158551  191271 pod_ready.go:81] duration metric: took 5.118553ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.158561  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.158613  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:55:40.158618  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.158628  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.158634  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.162612  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:40.162628  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.162637  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.162641  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.162647  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.162652  191271 round_trippers.go:580]     Audit-Id: 4541150f-2cd8-4ffe-962b-1e97b5fbf351
	I0522 18:55:40.162666  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.162671  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.163141  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"292","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 8107 chars]
	I0522 18:55:40.163704  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.163735  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.163746  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.163769  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.167888  191271 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:55:40.167909  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.167918  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.167924  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.167928  191271 round_trippers.go:580]     Audit-Id: c457d921-e201-4732-9893-b1385b6f1926
	I0522 18:55:40.167950  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.167961  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.167965  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.168254  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.168604  191271 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.168619  191271 pod_ready.go:81] duration metric: took 10.05025ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.168630  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.168682  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:55:40.168687  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.168696  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.168746  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.171083  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.171096  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.171102  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.171106  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.171108  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.171111  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.171115  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.171118  191271 round_trippers.go:580]     Audit-Id: 60a65c08-d5cd-4e57-814c-1732c8213de5
	I0522 18:55:40.171338  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"372","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5823 chars]
	I0522 18:55:40.171744  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.171761  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.171778  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.171785  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.172894  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.172909  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.172917  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.172923  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.172941  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.172949  191271 round_trippers.go:580]     Audit-Id: 4a8b876d-cd28-40b0-8c7f-d0d3dcdf9a8a
	I0522 18:55:40.172954  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.172960  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.173055  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.173292  191271 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.173304  191271 pod_ready.go:81] duration metric: took 4.667435ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.173312  191271 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.250373  191271 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0522 18:55:40.253545  191271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.930348316s)
	I0522 18:55:40.253673  191271 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:55:40.253686  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.253693  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.253697  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.255762  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.255783  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.255791  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.255797  191271 round_trippers.go:580]     Content-Length: 1274
	I0522 18:55:40.255802  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.255806  191271 round_trippers.go:580]     Audit-Id: 29bedd89-fc34-4a6e-af90-ad42da35c8fd
	I0522 18:55:40.255818  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.255822  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.255827  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.255874  191271 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1531"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0522 18:55:40.256474  191271 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:55:40.256539  191271 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:55:40.256552  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.256570  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.256575  191271 round_trippers.go:473]     Content-Type: application/json
	I0522 18:55:40.256579  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.259420  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.259441  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.259451  191271 round_trippers.go:580]     Audit-Id: 82c4354f-9c05-40ba-a5a3-b7a52e45d257
	I0522 18:55:40.259456  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.259460  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.259463  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.259466  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.259468  191271 round_trippers.go:580]     Content-Length: 1220
	I0522 18:55:40.259471  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.259529  191271 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
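The GET-then-PUT pair above is Kubernetes optimistic concurrency in action: the PUT carries the resourceVersion ("357") read by the GET, and the apiserver rejects the write with 409 Conflict if the object changed in between. A sketch of the same StorageClass update via client-go (placeholder kubeconfig path; the annotation mirrors the log):

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	sc, err := clientset.StorageV1().StorageClasses().Get(context.TODO(), "standard", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if sc.Annotations == nil {
		sc.Annotations = map[string]string{}
	}
	// Mark the class as default; the resourceVersion fetched by the Get
	// rides along in the object and enforces optimistic concurrency.
	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
	if _, err := clientset.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{}); err != nil {
		panic(err) // a stale resourceVersion surfaces here as a Conflict error
	}
}
```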
	I0522 18:55:40.349664  191271 request.go:629] Waited for 176.30464ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:55:40.349765  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:55:40.349773  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.349781  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.349789  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.351637  191271 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:55:40.351668  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.351677  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.351681  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.351685  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.351689  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.351694  191271 round_trippers.go:580]     Audit-Id: e542db4a-5526-4b20-9370-c944caf3811a
	I0522 18:55:40.351699  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.351857  191271 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"294","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4989 chars]
	I0522 18:55:40.548957  191271 request.go:629] Waited for 196.618236ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.549041  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:55:40.549047  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:40.549058  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:40.549067  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:40.551086  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:40.551107  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:40.551116  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:40.551122  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:40.551129  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:40.551133  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:40.551139  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:40 GMT
	I0522 18:55:40.551143  191271 round_trippers.go:580]     Audit-Id: 1e1c2f19-5fcc-4420-a18e-31b43efa6830
	I0522 18:55:40.551354  191271 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:55:40.551644  191271 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:55:40.551661  191271 pod_ready.go:81] duration metric: took 378.342194ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:55:40.551670  191271 pod_ready.go:38] duration metric: took 603.166138ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:55:40.551692  191271 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:55:40.551735  191271 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:55:40.607738  191271 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0522 18:55:40.607762  191271 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0522 18:55:40.607769  191271 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:55:40.607776  191271 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:55:40.607780  191271 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0522 18:55:40.607785  191271 command_runner.go:130] > pod/storage-provisioner configured
	I0522 18:55:40.607801  191271 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.23414082s)
	I0522 18:55:40.607851  191271 command_runner.go:130] > 1914
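The process check above runs pgrep inside the node over SSH, and "1914" is the matched PID. A sketch of the equivalent command run locally (the sudo prefix and pattern mirror the log):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// -x: exact match, -n: newest matching process, -f: match against the
	// full command line rather than just the process name.
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		fmt.Println("kube-apiserver process not found:", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s", out) // e.g. "1914"
}
```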
	I0522 18:55:40.607887  191271 api_server.go:72] duration metric: took 3.45820997s to wait for apiserver process to appear ...
	I0522 18:55:40.607897  191271 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:55:40.610759  191271 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:55:40.611942  191271 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:55:40.611952  191271 addons.go:505] duration metric: took 3.462227154s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0522 18:55:40.615330  191271 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0522 18:55:40.615348  191271 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0522 18:55:41.112944  191271 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:55:41.117073  191271 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
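The 500-then-200 sequence above is normal during startup: /healthz aggregates the poststarthook checks listed in the 500 body and fails until every hook completes. A sketch of the polling loop (InsecureSkipVerify is for illustration only; minikube authenticates with the cluster's client certificates):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for {
		resp, err := client.Get("https://192.168.67.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			// A 500 body itemizes each [+]/[-] poststarthook, as above.
		}
		time.Sleep(500 * time.Millisecond)
	}
}
```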
	I0522 18:55:41.117157  191271 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:55:41.117169  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.117179  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.117183  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.118187  191271 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:55:41.118209  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.118219  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.118225  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.118233  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.118238  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.118246  191271 round_trippers.go:580]     Content-Length: 263
	I0522 18:55:41.118249  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.118253  191271 round_trippers.go:580]     Audit-Id: 3e08a892-73f4-4e1c-b61f-1d2036a1b85f
	I0522 18:55:41.118286  191271 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:55:41.118396  191271 api_server.go:141] control plane version: v1.30.1
	I0522 18:55:41.118421  191271 api_server.go:131] duration metric: took 506.49152ms to wait for apiserver health ...
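The GET /version above can also be issued through client-go's discovery client instead of a raw round trip; a sketch (placeholder kubeconfig path):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	info, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", info.GitVersion) // e.g. v1.30.1
}
```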
	I0522 18:55:41.118429  191271 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:55:41.118489  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:41.118499  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.118508  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.118517  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.121750  191271 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:55:41.121770  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.121780  191271 round_trippers.go:580]     Audit-Id: d8ba21dd-cd96-4231-9557-114d06d5b330
	I0522 18:55:41.121787  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.121802  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.121807  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.121824  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.121832  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.122518  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1537","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60702 chars]
	I0522 18:55:41.124999  191271 system_pods.go:59] 8 kube-system pods found
	I0522 18:55:41.125049  191271 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:55:41.125064  191271 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:55:41.125078  191271 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:55:41.125095  191271 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:55:41.125108  191271 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:55:41.125123  191271 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:55:41.125135  191271 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:55:41.125143  191271 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0522 18:55:41.125153  191271 system_pods.go:74] duration metric: took 6.71923ms to wait for pod list to return data ...
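The "Running / Ready:ContainersNotReady" summaries above combine a pod's phase with its readiness conditions: the phase can be Running while containers are still reported not ready. A sketch that reproduces the same view (placeholder kubeconfig path):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			// Print phase alongside Ready/ContainersReady, as in the log.
			if c.Type == corev1.PodReady || c.Type == corev1.ContainersReady {
				fmt.Printf("%s: %s / %s:%s %s\n", p.Name, p.Status.Phase, c.Type, c.Status, c.Reason)
			}
		}
	}
}
```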
	I0522 18:55:41.125171  191271 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:55:41.125259  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:55:41.125270  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.125279  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.125284  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.127424  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:41.127447  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.127456  191271 round_trippers.go:580]     Audit-Id: a747672b-a20b-4af5-ade4-ea4b67829eed
	I0522 18:55:41.127461  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.127465  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.127482  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.127491  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.127494  191271 round_trippers.go:580]     Content-Length: 262
	I0522 18:55:41.127497  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.127525  191271 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:55:41.127713  191271 default_sa.go:45] found service account: "default"
	I0522 18:55:41.127733  191271 default_sa.go:55] duration metric: took 2.553683ms for default service account to be created ...
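The wait above exists because the service account controller creates the "default" ServiceAccount asynchronously after the namespace appears. A polling sketch (placeholder kubeconfig path; the log lists the namespace's accounts, while this fetches the account directly):

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		sa, err := clientset.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
		if err == nil {
			fmt.Println("found service account:", sa.Name)
			return
		}
		time.Sleep(time.Second)
	}
}
```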
	I0522 18:55:41.127742  191271 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:55:41.149070  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:55:41.149091  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.149101  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.149106  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.153123  191271 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:55:41.153141  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.153148  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.153151  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.153153  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.153156  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.153158  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.153161  191271 round_trippers.go:580]     Audit-Id: 2b84be8b-87e2-4071-b787-4728703fa23e
	I0522 18:55:41.154286  191271 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1537","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60702 chars]
	I0522 18:55:41.157071  191271 system_pods.go:86] 8 kube-system pods found
	I0522 18:55:41.157102  191271 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:55:41.157114  191271 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:55:41.157125  191271 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:55:41.157143  191271 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:55:41.157161  191271 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:55:41.157175  191271 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:55:41.157188  191271 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:55:41.157216  191271 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0522 18:55:41.157231  191271 system_pods.go:126] duration metric: took 29.478851ms to wait for k8s-apps to be running ...
	I0522 18:55:41.157247  191271 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:55:41.157295  191271 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:55:41.169086  191271 system_svc.go:56] duration metric: took 11.831211ms WaitForService to wait for kubelet
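The kubelet check above is a systemd exit-code test run over SSH inside the node; locally it reduces to the sketch below (dropping the sudo prefix the log shows, since is-active does not require it):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active --quiet kubelet` exits 0 only when the unit is
	// active; any non-zero exit code means kubelet is not running.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
```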
	I0522 18:55:41.169113  191271 kubeadm.go:576] duration metric: took 4.019434744s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:55:41.169134  191271 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:55:41.349440  191271 request.go:629] Waited for 180.210127ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes
	I0522 18:55:41.349507  191271 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:55:41.349515  191271 round_trippers.go:469] Request Headers:
	I0522 18:55:41.349525  191271 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:55:41.349532  191271 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:55:41.352161  191271 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:55:41.352182  191271 round_trippers.go:577] Response Headers:
	I0522 18:55:41.352190  191271 round_trippers.go:580]     Audit-Id: 7838a856-6baa-4e95-bfcc-54203cf8503d
	I0522 18:55:41.352195  191271 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:55:41.352201  191271 round_trippers.go:580]     Content-Type: application/json
	I0522 18:55:41.352206  191271 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:55:41.352219  191271 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:55:41.352223  191271 round_trippers.go:580]     Date: Wed, 22 May 2024 18:55:41 GMT
	I0522 18:55:41.352341  191271 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1541"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 5264 chars]
	I0522 18:55:41.352807  191271 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:55:41.352842  191271 node_conditions.go:123] node cpu capacity is 8
	I0522 18:55:41.352854  191271 node_conditions.go:105] duration metric: took 183.714016ms to run NodePressure ...
	I0522 18:55:41.352869  191271 start.go:240] waiting for startup goroutines ...
	I0522 18:55:41.352879  191271 start.go:245] waiting for cluster config update ...
	I0522 18:55:41.352892  191271 start.go:254] writing updated cluster config ...
	I0522 18:55:41.354996  191271 out.go:177] 
	I0522 18:55:41.356517  191271 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:55:41.356594  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:41.358237  191271 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:55:41.359528  191271 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:55:41.360862  191271 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:55:41.362023  191271 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:55:41.362046  191271 cache.go:56] Caching tarball of preloaded images
	I0522 18:55:41.362122  191271 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:55:41.362131  191271 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:55:41.362129  191271 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:55:41.362226  191271 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:55:41.379872  191271 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:55:41.379903  191271 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:55:41.379925  191271 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:55:41.379963  191271 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:41.380037  191271 start.go:364] duration metric: took 46.895µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:55:41.380065  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:41.380079  191271 fix.go:54] fixHost starting: m02
	I0522 18:55:41.380381  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:41.396179  191271 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Stopped err=<nil>
	W0522 18:55:41.396216  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:41.398486  191271 out.go:177] * Restarting existing docker container for "multinode-737786-m02" ...
	I0522 18:55:41.399852  191271 cli_runner.go:164] Run: docker start multinode-737786-m02
	I0522 18:55:41.739232  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:41.759108  191271 kic.go:430] container "multinode-737786-m02" state is running.
	I0522 18:55:41.759532  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:41.781733  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:55:41.781802  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:41.800873  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32932 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	W0522 18:55:41.801752  191271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59104->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:41.801799  191271 retry.go:31] will retry after 178.387586ms: ssh: handshake failed: read tcp 127.0.0.1:59104->127.0.0.1:32932: read: connection reset by peer
	W0522 18:55:41.981559  191271 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:59106->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:41.981589  191271 retry.go:31] will retry after 356.566239ms: ssh: handshake failed: read tcp 127.0.0.1:59106->127.0.0.1:32932: read: connection reset by peer
	I0522 18:55:42.427258  191271 command_runner.go:130] > 27%!
	(MISSING)I0522 18:55:42.427545  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:55:42.431134  191271 command_runner.go:130] > 213G
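
The "27%!" and "(MISSING)" fragments above are ordinary Go fmt output, not corruption in this report: the captured df output ("27%", the usage of /var) appears to have been routed through a printf-style logger, so the bare '%' is parsed as a verb with no argument to consume, and the next log line glues directly onto "(MISSING)". A minimal, deliberately malformed reproduction (illustrative only, not minikube code):

    package main

    import "fmt"

    func main() {
        // In the format string "> 27%\n", the '%' starts a verb and '\n'
        // becomes the verb character; with no argument left, fmt emits
        // "%!" + verb + "(MISSING)", i.e. "> 27%!\n(MISSING)", with no
        // trailing newline, which is the exact shape seen in the log.
        fmt.Printf("> 27%\n")
    }
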
	I0522 18:55:42.431367  191271 fix.go:56] duration metric: took 1.051284151s for fixHost
	I0522 18:55:42.431388  191271 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1.051331877s
	W0522 18:55:42.431406  191271 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:55:42.431491  191271 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:55:42.431503  191271 start.go:728] Will try again in 5 seconds ...
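
The StartHost failure above originates in the address lookup a few steps earlier: the docker container inspect Go template prints "IPAddress,GlobalIPv6Address" for the named network, and the caller splits the output on the comma and requires exactly two fields. If the freshly restarted container is not yet attached to the expected network, the template prints an empty string, the split yields a single empty field, and the run fails with "container addresses should have 2 values, got 1 values". A sketch of that check, approximating (not quoting) minikube's logic:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIPs mimics the inspect-and-split step seen in the log: it
    // expects "ipv4,ipv6" and fails with "got 1 values" whenever the
    // container is not attached to the named network, because splitting
    // an empty string on "," returns one empty field.
    func containerIPs(container, network string) (string, string, error) {
        format := fmt.Sprintf(`{{with (index .NetworkSettings.Networks %q)}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}`, network)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", "", err
        }
        ips := strings.Split(strings.TrimSpace(string(out)), ",")
        if len(ips) != 2 {
            return "", "", fmt.Errorf("container addresses should have 2 values, got %d values: %v", len(ips), ips)
        }
        return ips[0], ips[1], nil
    }

    func main() {
        ipv4, ipv6, err := containerIPs("multinode-737786-m02", "multinode-737786-m02")
        fmt.Println(ipv4, ipv6, err)
    }

Under this reading, the retry that follows is reasonable: the address can appear once the container finishes joining the network, which is why the code waits 5 seconds and runs fixHost again.
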
	I0522 18:55:47.432624  191271 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:55:47.432726  191271 start.go:364] duration metric: took 68.501µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:55:47.432755  191271 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:55:47.432767  191271 fix.go:54] fixHost starting: m02
	I0522 18:55:47.433049  191271 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:55:47.449030  191271 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Running err=<nil>
	W0522 18:55:47.449055  191271 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:55:47.451390  191271 out.go:177] * Updating the running docker "multinode-737786-m02" container ...
	I0522 18:55:47.452545  191271 machine.go:94] provisionDockerMachine start ...
	I0522 18:55:47.452614  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.468745  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.468930  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.468943  191271 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:55:47.578455  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:55:47.578487  191271 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:55:47.578548  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.595125  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.595343  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.595360  191271 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:55:47.721219  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:55:47.721292  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:55:47.737411  191271 main.go:141] libmachine: Using SSH client type: native
	I0522 18:55:47.737578  191271 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32932 <nil> <nil>}
	I0522 18:55:47.737594  191271 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:55:47.850975  191271 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:55:47.851002  191271 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:55:47.851027  191271 ubuntu.go:177] setting up certificates
	I0522 18:55:47.851040  191271 provision.go:84] configureAuth start
	I0522 18:55:47.851098  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.865910  191271 provision.go:87] duration metric: took 14.860061ms to configureAuth
	W0522 18:55:47.865931  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.865957  191271 retry.go:31] will retry after 87.876µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.867083  191271 provision.go:84] configureAuth start
	I0522 18:55:47.867151  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.883908  191271 provision.go:87] duration metric: took 16.806772ms to configureAuth
	W0522 18:55:47.883927  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.883942  191271 retry.go:31] will retry after 102.785µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.885049  191271 provision.go:84] configureAuth start
	I0522 18:55:47.885127  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.899850  191271 provision.go:87] duration metric: took 14.775266ms to configureAuth
	W0522 18:55:47.899866  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.899883  191271 retry.go:31] will retry after 127.962µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.900992  191271 provision.go:84] configureAuth start
	I0522 18:55:47.901044  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.915918  191271 provision.go:87] duration metric: took 14.910204ms to configureAuth
	W0522 18:55:47.915936  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.915950  191271 retry.go:31] will retry after 176.177µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.917057  191271 provision.go:84] configureAuth start
	I0522 18:55:47.917110  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.933132  191271 provision.go:87] duration metric: took 16.057912ms to configureAuth
	W0522 18:55:47.933147  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.933162  191271 retry.go:31] will retry after 415.738µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.934277  191271 provision.go:84] configureAuth start
	I0522 18:55:47.934340  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.949561  191271 provision.go:87] duration metric: took 15.2663ms to configureAuth
	W0522 18:55:47.949578  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.949593  191271 retry.go:31] will retry after 695.271µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.950702  191271 provision.go:84] configureAuth start
	I0522 18:55:47.950753  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.965237  191271 provision.go:87] duration metric: took 14.518838ms to configureAuth
	W0522 18:55:47.965256  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.965273  191271 retry.go:31] will retry after 624.889µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.966387  191271 provision.go:84] configureAuth start
	I0522 18:55:47.966449  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.981238  191271 provision.go:87] duration metric: took 14.830065ms to configureAuth
	W0522 18:55:47.981257  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.981273  191271 retry.go:31] will retry after 1.057459ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.982393  191271 provision.go:84] configureAuth start
	I0522 18:55:47.982466  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:47.998674  191271 provision.go:87] duration metric: took 16.255395ms to configureAuth
	W0522 18:55:47.998692  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:47.998712  191271 retry.go:31] will retry after 2.801269ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.001909  191271 provision.go:84] configureAuth start
	I0522 18:55:48.001983  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.017417  191271 provision.go:87] duration metric: took 15.487122ms to configureAuth
	W0522 18:55:48.017438  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.017457  191271 retry.go:31] will retry after 2.6692ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.020641  191271 provision.go:84] configureAuth start
	I0522 18:55:48.020707  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.035890  191271 provision.go:87] duration metric: took 15.231178ms to configureAuth
	W0522 18:55:48.035907  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.035925  191271 retry.go:31] will retry after 4.913205ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.041121  191271 provision.go:84] configureAuth start
	I0522 18:55:48.041190  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.056341  191271 provision.go:87] duration metric: took 15.201859ms to configureAuth
	W0522 18:55:48.056358  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.056374  191271 retry.go:31] will retry after 8.73344ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.065553  191271 provision.go:84] configureAuth start
	I0522 18:55:48.065620  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.080469  191271 provision.go:87] duration metric: took 14.898331ms to configureAuth
	W0522 18:55:48.080489  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.080506  191271 retry.go:31] will retry after 13.355259ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.094679  191271 provision.go:84] configureAuth start
	I0522 18:55:48.094748  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.109923  191271 provision.go:87] duration metric: took 15.225024ms to configureAuth
	W0522 18:55:48.109942  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.109959  191271 retry.go:31] will retry after 17.591086ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.128159  191271 provision.go:84] configureAuth start
	I0522 18:55:48.128244  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.143258  191271 provision.go:87] duration metric: took 15.081459ms to configureAuth
	W0522 18:55:48.143309  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.143328  191271 retry.go:31] will retry after 30.694182ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.174523  191271 provision.go:84] configureAuth start
	I0522 18:55:48.174643  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.190339  191271 provision.go:87] duration metric: took 15.791254ms to configureAuth
	W0522 18:55:48.190355  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.190371  191271 retry.go:31] will retry after 60.478865ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.251580  191271 provision.go:84] configureAuth start
	I0522 18:55:48.251680  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.267446  191271 provision.go:87] duration metric: took 15.839853ms to configureAuth
	W0522 18:55:48.267466  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.267484  191271 retry.go:31] will retry after 63.884927ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.331706  191271 provision.go:84] configureAuth start
	I0522 18:55:48.331794  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.347085  191271 provision.go:87] duration metric: took 15.328539ms to configureAuth
	W0522 18:55:48.347105  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.347122  191271 retry.go:31] will retry after 87.655661ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.435332  191271 provision.go:84] configureAuth start
	I0522 18:55:48.435425  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.451751  191271 provision.go:87] duration metric: took 16.388799ms to configureAuth
	W0522 18:55:48.451774  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.451793  191271 retry.go:31] will retry after 195.353755ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.648137  191271 provision.go:84] configureAuth start
	I0522 18:55:48.648216  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.663505  191271 provision.go:87] duration metric: took 15.339444ms to configureAuth
	W0522 18:55:48.663523  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.663539  191271 retry.go:31] will retry after 289.097561ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.952931  191271 provision.go:84] configureAuth start
	I0522 18:55:48.953045  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:48.968997  191271 provision.go:87] duration metric: took 16.035059ms to configureAuth
	W0522 18:55:48.969019  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:48.969037  191271 retry.go:31] will retry after 186.761832ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.156383  191271 provision.go:84] configureAuth start
	I0522 18:55:49.156459  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:49.173159  191271 provision.go:87] duration metric: took 16.748544ms to configureAuth
	W0522 18:55:49.173181  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.173199  191271 retry.go:31] will retry after 327.938905ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.501699  191271 provision.go:84] configureAuth start
	I0522 18:55:49.501785  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:49.517950  191271 provision.go:87] duration metric: took 16.220449ms to configureAuth
	W0522 18:55:49.517970  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:49.517987  191271 retry.go:31] will retry after 817.802375ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:50.336261  191271 provision.go:84] configureAuth start
	I0522 18:55:50.336358  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:50.352199  191271 provision.go:87] duration metric: took 15.908402ms to configureAuth
	W0522 18:55:50.352217  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:50.352235  191271 retry.go:31] will retry after 975.249665ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:51.327901  191271 provision.go:84] configureAuth start
	I0522 18:55:51.327997  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:51.343571  191271 provision.go:87] duration metric: took 15.641557ms to configureAuth
	W0522 18:55:51.343589  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:51.343604  191271 retry.go:31] will retry after 1.511582383s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:52.855327  191271 provision.go:84] configureAuth start
	I0522 18:55:52.855421  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:52.874130  191271 provision.go:87] duration metric: took 18.776068ms to configureAuth
	W0522 18:55:52.874152  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:52.874173  191271 retry.go:31] will retry after 2.587827778s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:55.462838  191271 provision.go:84] configureAuth start
	I0522 18:55:55.462920  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:55:55.479954  191271 provision.go:87] duration metric: took 17.080473ms to configureAuth
	W0522 18:55:55.479973  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:55:55.479992  191271 retry.go:31] will retry after 4.788436213s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:00.268555  191271 provision.go:84] configureAuth start
	I0522 18:56:00.268664  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:00.284768  191271 provision.go:87] duration metric: took 16.187921ms to configureAuth
	W0522 18:56:00.284787  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:00.284804  191271 retry.go:31] will retry after 4.16940433s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:04.458082  191271 provision.go:84] configureAuth start
	I0522 18:56:04.458158  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:04.474138  191271 provision.go:87] duration metric: took 16.031529ms to configureAuth
	W0522 18:56:04.474155  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:04.474171  191271 retry.go:31] will retry after 11.936949428s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:16.411971  191271 provision.go:84] configureAuth start
	I0522 18:56:16.412062  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:16.427556  191271 provision.go:87] duration metric: took 15.558638ms to configureAuth
	W0522 18:56:16.427574  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:16.427592  191271 retry.go:31] will retry after 9.484561192s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:25.912297  191271 provision.go:84] configureAuth start
	I0522 18:56:25.912384  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:25.927852  191271 provision.go:87] duration metric: took 15.527116ms to configureAuth
	W0522 18:56:25.927874  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:25.927894  191271 retry.go:31] will retry after 27.958237861s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:53.888233  191271 provision.go:84] configureAuth start
	I0522 18:56:53.888316  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:56:53.906509  191271 provision.go:87] duration metric: took 18.250582ms to configureAuth
	W0522 18:56:53.906529  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:56:53.906545  191271 retry.go:31] will retry after 38.774225348s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.682746  191271 provision.go:84] configureAuth start
	I0522 18:57:32.682888  191271 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:57:32.700100  191271 provision.go:87] duration metric: took 17.312123ms to configureAuth
	W0522 18:57:32.700120  191271 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.700141  191271 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
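
The long run of configureAuth attempts above follows a capped exponential backoff: the waits grow roughly geometrically, with jitter, from 87µs through 102µs and so on up to 27.9s and 38.7s, until a total budget of about two minutes is exhausted; only then is the error surfaced as fatal. A minimal sketch of that retry shape (the log's retry.go prefix suggests a dedicated helper; this is an illustration, not minikube's implementation):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // retryExpo retries fn with exponentially growing sleeps until the
    // total budget is spent, mirroring the delay progression in the log.
    // A real helper would also add jitter to the doubling.
    func retryExpo(fn func() error, initial, budget time.Duration) error {
        deadline := time.Now().Add(budget)
        delay := initial
        var err error
        for time.Now().Before(deadline) {
            if err = fn(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2
        }
        return err
    }

    func main() {
        err := retryExpo(func() error {
            return errors.New("error getting ip during provisioning")
        }, 100*time.Microsecond, 2*time.Second) // short budget for the demo
        fmt.Println("gave up:", err)
    }
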
	I0522 18:57:32.700149  191271 machine.go:97] duration metric: took 1m45.247591588s to provisionDockerMachine
	I0522 18:57:32.700204  191271 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:57:32.700240  191271 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:57:32.716059  191271 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32932 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 18:57:32.795615  191271 command_runner.go:130] > 27%!
	(MISSING)I0522 18:57:32.795930  191271 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:57:32.799643  191271 command_runner.go:130] > 213G
	I0522 18:57:32.799841  191271 fix.go:56] duration metric: took 1m45.367071968s for fixHost
	I0522 18:57:32.799861  191271 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m45.367119086s
	W0522 18:57:32.799939  191271 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:57:32.801845  191271 out.go:177] 
	W0522 18:57:32.802985  191271 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 18:57:32.802997  191271 out.go:239] * 
	W0522 18:57:32.803803  191271 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 18:57:32.805200  191271 out.go:177] 
	
	
	==> Docker <==
	May 22 18:55:37 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7b6f81208c49be20c2ce466f1d45caff3944731d4d6d47de580685eab70a7397/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:37 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:37Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fd5e5467e43217c5e999d05af37ed4a9d45b01e53e6f10773150099d220720d7/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:40 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fb1d360112edd5f1fefe695c76c60c4bcb6ff37c4ff1d3557141f077bc1d13ec/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6f7044f4a3341c31c26a26c9a54148b5edf783501f39de034de125ea0756da88/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6b52bbcc47a83fe266e6f891da30d8acaee28a3ce90bbbfa7209a66a33a7fc4/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/635f4e9d5f8f1c8d7e841846d31b2e5cf268c887e750af271ef32caeb22d24a1/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:55:41 multinode-737786 cri-dockerd[1188]: time="2024-05-22T18:55:41Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a6b2b3d758240c7c593442266ca02c7d49dce426e0b92147a72b5a13d59d90d0/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:56:11 multinode-737786 dockerd[960]: time="2024-05-22T18:56:11.560442726Z" level=info msg="ignoring event" container=11bb4599579bf5a23ef05bb4313bbd0b1ad6e971d79409ac99180f1970fef76b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:33 multinode-737786 dockerd[960]: 2024/05/22 18:57:33 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:34 multinode-737786 dockerd[960]: 2024/05/22 18:57:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:34 multinode-737786 dockerd[960]: 2024/05/22 18:57:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:34 multinode-737786 dockerd[960]: 2024/05/22 18:57:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 22 18:57:34 multinode-737786 dockerd[960]: 2024/05/22 18:57:34 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
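
The repeated "superfluous response.WriteHeader call" lines come from Go's net/http: once a handler has written the response status, any further WriteHeader call is ignored and logged together with the caller's location (here dockerd's otelhttp wrapper). They indicate noisy instrumentation rather than failed requests. A minimal reproduction:

    package main

    import (
        "log"
        "net/http"
    )

    func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
            // The second call is ignored, and net/http logs:
            // "http: superfluous response.WriteHeader call from ..."
            w.WriteHeader(http.StatusInternalServerError)
        })
        log.Fatal(http.ListenAndServe("127.0.0.1:8080", nil))
    }

Hitting the server once (for example, curl http://127.0.0.1:8080/) makes net/http print the same superfluous-WriteHeader warning to its error log.
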
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	2775772a4970a       6e38f40d628db                                                                                         2 minutes ago       Running             storage-provisioner       2                   6f7044f4a3341       storage-provisioner
	513df62eec3d7       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   2                   635f4e9d5f8f1       coredns-7db6d8ff4d-jhsz9
	ca4e4fb6fa63f       8c811b4aec35f                                                                                         3 minutes ago       Running             busybox                   1                   a6b52bbcc47a8       busybox-fc5497c4f-7zbr8
	43dd6bc557dd6       ac1c61439df46                                                                                         3 minutes ago       Running             kindnet-cni               1                   a6b2b3d758240       kindnet-qpfbl
	11bb4599579bf       6e38f40d628db                                                                                         3 minutes ago       Exited              storage-provisioner       1                   6f7044f4a3341       storage-provisioner
	9e66337e0a3b0       747097150317f                                                                                         3 minutes ago       Running             kube-proxy                1                   fb1d360112edd       kube-proxy-kqtgj
	f57ae12003854       25a1387cdab82                                                                                         3 minutes ago       Running             kube-controller-manager   1                   fd5e5467e4321       kube-controller-manager-multinode-737786
	495d862fbc889       91be940803172                                                                                         3 minutes ago       Running             kube-apiserver            1                   7b6f81208c49b       kube-apiserver-multinode-737786
	94cf43c9c1855       a52dc94f0a912                                                                                         3 minutes ago       Running             kube-scheduler            1                   74a359ee9dc76       kube-scheduler-multinode-737786
	eefaf11c384e1       3861cfcd7c04c                                                                                         3 minutes ago       Running             etcd                      1                   2558846c3bbbb       etcd-multinode-737786
	2e5611854b2b6       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   22 minutes ago      Exited              busybox                   0                   7fefb8ab9046a       busybox-fc5497c4f-7zbr8
	14ca8a91c3a85       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   1                   ada6e7b25c533       coredns-7db6d8ff4d-jhsz9
	80553b93f7ea9       kindest/kindnetd@sha256:9c2b5fcda3cb5a9725ecb893f3c8998a92d51a87465a886eb563e18d649383a8              26 minutes ago      Exited              kindnet-cni               0                   aa62dfdeffd06       kindnet-qpfbl
	4394527287d9e       747097150317f                                                                                         26 minutes ago      Exited              kube-proxy                0                   6eb49817ae60f       kube-proxy-kqtgj
	6991b35c68003       91be940803172                                                                                         26 minutes ago      Exited              kube-apiserver            0                   df50647100140       kube-apiserver-multinode-737786
	5f53f367ebd9c       3861cfcd7c04c                                                                                         26 minutes ago      Exited              etcd                      0                   1d92837fd4e76       etcd-multinode-737786
	06715a769ee48       25a1387cdab82                                                                                         26 minutes ago      Exited              kube-controller-manager   0                   4f2b347dd216a       kube-controller-manager-multinode-737786
	967d2411643e1       a52dc94f0a912                                                                                         26 minutes ago      Exited              kube-scheduler            0                   65627abb36122       kube-scheduler-multinode-737786
	
	
	==> coredns [14ca8a91c3a8] <==
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:52125 - 53972 "HINFO IN 1463268253594413494.9010003919506232664. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015298164s
	[INFO] 10.244.0.3:48378 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000238684s
	[INFO] 10.244.0.3:59221 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 31 0.013090305s
	[INFO] 10.244.0.3:42881 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.000740933s
	[INFO] 10.244.0.3:51488 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.022252255s
	[INFO] 10.244.0.3:57389 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143058s
	[INFO] 10.244.0.3:48854 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.005255577s
	[INFO] 10.244.0.3:37749 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000129992s
	[INFO] 10.244.0.3:49159 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143259s
	[INFO] 10.244.0.3:33267 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003880164s
	[INFO] 10.244.0.3:55644 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000123464s
	[INFO] 10.244.0.3:40518 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000115443s
	[INFO] 10.244.0.3:44250 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088045s
	[INFO] 10.244.0.3:34342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000102385s
	[INFO] 10.244.0.3:58734 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104426s
	[INFO] 10.244.0.3:33373 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000089833s
	[INFO] 10.244.0.3:46218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000084391s
	[INFO] 10.244.0.3:58305 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00011407s
	[INFO] 10.244.0.3:41894 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000140377s
	[INFO] 10.244.0.3:40760 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000132699s
	[INFO] 10.244.0.3:37622 - 5 "PTR IN 1.67.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000097943s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [513df62eec3d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59311 - 41845 "HINFO IN 6854891090202188984.7957026021720121455. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009982044s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[445986774]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[445986774]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:56:11.806)
	Trace[445986774]: [30.001125532s] [30.001125532s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1234663045]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[1234663045]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:56:11.806)
	Trace[1234663045]: [30.001264536s] [30.001264536s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[889784802]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[889784802]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:56:11.806)
	Trace[889784802]: [30.001227605s] [30.001227605s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 18:59:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:55:40 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 8fec5e25fede4a85b02ed21e485f5a15
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         26m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 26m                    kube-proxy       
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  26m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  26m                    kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26m                    kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26m                    kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  Starting                 26m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                    node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	  Normal  Starting                 3m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m45s (x8 over 3m45s)  kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m45s (x8 over 3m45s)  kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m45s (x7 over 3m45s)  kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m29s                  node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	
	
	==> dmesg <==
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000110] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +1.009162] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000007] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.004064] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000005] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +2.011784] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000023] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.003972] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000004] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000001] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +4.063705] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000000] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000007] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000000] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000009] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +8.187381] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000006] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000015] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	
	
	==> etcd [5f53f367ebd9] <==
	{"level":"info","ts":"2024-05-22T18:32:33.366485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.366593Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.36662Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:32:33.367842Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:32:33.367845Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"warn","ts":"2024-05-22T18:33:12.340568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.136189ms","expected-duration":"100ms","prefix":"","request":"header:<ID:2289956859669261164 > lease_revoke:<id:1fc78fa1938e2adc>","response":"size:29"}
	{"level":"info","ts":"2024-05-22T18:42:33.669298Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":669}
	{"level":"info","ts":"2024-05-22T18:42:33.674226Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":669,"took":"4.650962ms","hash":2988179383,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2105344,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-05-22T18:42:33.674261Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2988179383,"revision":669,"compact-revision":-1}
	{"level":"info","ts":"2024-05-22T18:47:33.674441Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":911}
	{"level":"info","ts":"2024-05-22T18:47:33.676887Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":911,"took":"2.169071ms","hash":3399617496,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1507328,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:47:33.676921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3399617496,"revision":911,"compact-revision":669}
	{"level":"info","ts":"2024-05-22T18:52:33.678754Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1153}
	{"level":"info","ts":"2024-05-22T18:52:33.681122Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":1153,"took":"2.100554ms","hash":435437424,"current-db-size-bytes":2105344,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":1474560,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2024-05-22T18:52:33.681165Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":435437424,"revision":1153,"compact-revision":911}
	{"level":"info","ts":"2024-05-22T18:55:19.441272Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:55:19.441345Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-737786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:55:19.441469Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:55:19.441514Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:55:19.443085Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:55:19.443188Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:55:19.454136Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-05-22T18:55:19.456177Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:19.456334Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:19.456374Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-737786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> etcd [eefaf11c384e] <==
	{"level":"info","ts":"2024-05-22T18:55:37.977587Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:55:37.977682Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:55:37.977702Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:55:38.043619Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:55:38.043748Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:55:38.043762Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:55:38.048062Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:55:38.048115Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:38.048236Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:55:38.05031Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:55:38.050381Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:55:38.967217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.967297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.967325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.96734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.969829Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:55:38.969867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:55:38.969858Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:55:38.970074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:55:38.970142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:55:38.971872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-05-22T18:55:38.971922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:59:21 up  1:41,  0 users,  load average: 0.21, 0.31, 0.32
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [43dd6bc557dd] <==
	I0522 18:57:12.520205       1 main.go:227] handling current node
	I0522 18:57:22.532061       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:22.532090       1 main.go:227] handling current node
	I0522 18:57:32.535480       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:32.535503       1 main.go:227] handling current node
	I0522 18:57:42.538762       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:42.538784       1 main.go:227] handling current node
	I0522 18:57:52.543959       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:52.543983       1 main.go:227] handling current node
	I0522 18:58:02.555704       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:02.555731       1 main.go:227] handling current node
	I0522 18:58:12.559567       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:12.559591       1 main.go:227] handling current node
	I0522 18:58:22.568605       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:22.568626       1 main.go:227] handling current node
	I0522 18:58:32.571486       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:32.571508       1 main.go:227] handling current node
	I0522 18:58:42.574487       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:42.574512       1 main.go:227] handling current node
	I0522 18:58:52.586300       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:52.586327       1 main.go:227] handling current node
	I0522 18:59:02.589702       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:59:02.589723       1 main.go:227] handling current node
	I0522 18:59:12.601733       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:59:12.601755       1 main.go:227] handling current node
	
	
	==> kindnet [80553b93f7ea] <==
	I0522 18:53:16.849062       1 main.go:227] handling current node
	I0522 18:53:26.852270       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:26.852292       1 main.go:227] handling current node
	I0522 18:53:36.861628       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:36.861651       1 main.go:227] handling current node
	I0522 18:53:46.865179       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:46.865201       1 main.go:227] handling current node
	I0522 18:53:56.868146       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:53:56.868167       1 main.go:227] handling current node
	I0522 18:54:06.871251       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:06.871301       1 main.go:227] handling current node
	I0522 18:54:16.877176       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:16.877198       1 main.go:227] handling current node
	I0522 18:54:26.880323       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:26.880354       1 main.go:227] handling current node
	I0522 18:54:36.882866       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:36.882888       1 main.go:227] handling current node
	I0522 18:54:46.886203       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:46.886223       1 main.go:227] handling current node
	I0522 18:54:56.888938       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:54:56.888961       1 main.go:227] handling current node
	I0522 18:55:06.893856       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:55:06.893878       1 main.go:227] handling current node
	I0522 18:55:16.902298       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:55:16.902328       1 main.go:227] handling current node
	
	
	==> kube-apiserver [495d862fbc88] <==
	I0522 18:55:39.878184       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0522 18:55:39.879367       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0522 18:55:39.878229       1 naming_controller.go:291] Starting NamingConditionController
	I0522 18:55:39.878252       1 controller.go:139] Starting OpenAPI controller
	I0522 18:55:39.878272       1 controller.go:87] Starting OpenAPI V3 controller
	I0522 18:55:40.047494       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:55:40.047771       1 policy_source.go:224] refreshing policies
	I0522 18:55:40.048571       1 shared_informer.go:320] Caches are synced for configmaps
	I0522 18:55:40.051831       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0522 18:55:40.057053       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0522 18:55:40.057086       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 18:55:40.057103       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0522 18:55:40.057166       1 aggregator.go:165] initial CRD sync complete...
	I0522 18:55:40.057226       1 autoregister_controller.go:141] Starting autoregister controller
	I0522 18:55:40.057259       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0522 18:55:40.057286       1 cache.go:39] Caches are synced for autoregister controller
	I0522 18:55:40.057314       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0522 18:55:40.057291       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0522 18:55:40.058812       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:55:40.062976       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0522 18:55:40.073809       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0522 18:55:40.148728       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0522 18:55:40.880951       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:55:53.121182       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:55:53.171410       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [6991b35c6800] <==
	W0522 18:55:28.873894       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.877263       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.903981       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.935678       1 logging.go:59] [core] [Channel #31 SubChannel #32] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.952488       1 logging.go:59] [core] [Channel #28 SubChannel #29] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.962287       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.966797       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.983853       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:28.993570       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.022783       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.048399       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.069672       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.110404       1 logging.go:59] [core] [Channel #169 SubChannel #170] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.124921       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.158271       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.170885       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.200867       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.291229       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.307325       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.329710       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.376916       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.387751       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.387856       1 logging.go:59] [core] [Channel #157 SubChannel #158] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.407058       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:55:29.465132       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [06715a769ee4] <==
	I0522 18:32:51.508227       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:32:51.510492       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0522 18:32:51.731580       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.201511ms"
	I0522 18:32:51.744266       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="12.636016ms"
	I0522 18:32:51.744358       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="54.827µs"
	I0522 18:32:51.753010       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="55.206µs"
	I0522 18:32:51.948128       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985116       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:32:51.985147       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:32:52.659973       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="84.558918ms"
	I0522 18:32:52.665248       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="5.235378ms"
	I0522 18:32:52.665347       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.276µs"
	I0522 18:32:53.988679       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.736µs"
	I0522 18:32:54.012021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="70.111µs"
	I0522 18:33:06.349520       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="85.376µs"
	I0522 18:33:07.116283       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="91.666µs"
	I0522 18:33:07.118724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="60.754µs"
	I0522 18:33:07.134455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.106291ms"
	I0522 18:33:07.134566       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="61.831µs"
	I0522 18:36:27.123251       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.253947ms"
	I0522 18:36:27.133722       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="10.381144ms"
	I0522 18:36:27.133807       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="43.98µs"
	I0522 18:36:27.133845       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.606µs"
	I0522 18:36:30.202749       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.775378ms"
	I0522 18:36:30.202822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="34.162µs"
	
	
	==> kube-controller-manager [f57ae1200385] <==
	I0522 18:55:52.857093       1 shared_informer.go:320] Caches are synced for daemon sets
	I0522 18:55:52.858302       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:55:52.861613       1 shared_informer.go:320] Caches are synced for disruption
	I0522 18:55:52.862744       1 shared_informer.go:320] Caches are synced for stateful set
	I0522 18:55:52.868272       1 shared_informer.go:320] Caches are synced for attach detach
	I0522 18:55:52.868302       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0522 18:55:52.868326       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0522 18:55:52.868368       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0522 18:55:52.869529       1 shared_informer.go:320] Caches are synced for expand
	I0522 18:55:52.876074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.118981ms"
	I0522 18:55:52.876376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.821µs"
	I0522 18:55:52.907328       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0522 18:55:52.918533       1 shared_informer.go:320] Caches are synced for crt configmap
	I0522 18:55:52.953194       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0522 18:55:52.966173       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:55:52.967979       1 shared_informer.go:320] Caches are synced for job
	I0522 18:55:52.972293       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:55:53.014552       1 shared_informer.go:320] Caches are synced for cronjob
	I0522 18:55:53.051331       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0522 18:55:53.055811       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 18:55:53.485614       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:55:53.518125       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:55:53.518148       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:56:15.529637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.104196ms"
	I0522 18:56:15.529730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.444µs"
	
	
	==> kube-proxy [4394527287d9] <==
	I0522 18:32:52.765339       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:32:52.773214       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:32:52.863568       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:32:52.863623       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:32:52.866266       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:32:52.866293       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:32:52.866319       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:32:52.866617       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:32:52.866651       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:32:52.867723       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:32:52.867759       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:32:52.867836       1 config.go:192] "Starting service config controller"
	I0522 18:32:52.867850       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:32:52.868004       1 config.go:319] "Starting node config controller"
	I0522 18:32:52.868024       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:32:52.968314       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:32:52.968337       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:32:52.968365       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9e66337e0a3b] <==
	I0522 18:55:41.578062       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:55:41.643800       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:55:41.666145       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:55:41.666189       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:55:41.668333       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:55:41.668357       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:55:41.668379       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:55:41.668660       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:55:41.668683       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:55:41.669565       1 config.go:192] "Starting service config controller"
	I0522 18:55:41.669588       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:55:41.669604       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:55:41.669612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:55:41.669709       1 config.go:319] "Starting node config controller"
	I0522 18:55:41.669715       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:55:41.770605       1 shared_informer.go:320] Caches are synced for node config
	I0522 18:55:41.770630       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:55:41.770656       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [94cf43c9c185] <==
	I0522 18:55:38.604832       1 serving.go:380] Generated self-signed cert in-memory
	W0522 18:55:39.946054       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0522 18:55:39.946095       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0522 18:55:39.946107       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0522 18:55:39.946116       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0522 18:55:39.960130       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0522 18:55:39.960159       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:55:39.962719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0522 18:55:39.962851       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0522 18:55:39.962872       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:55:39.962893       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0522 18:55:40.163188       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [967d2411643e] <==
	E0522 18:32:35.377638       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:35.377343       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0522 18:32:35.377658       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0522 18:32:35.377663       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:35.377691       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:35.377334       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377733       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:35.377311       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.210782       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.210838       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.261860       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0522 18:32:36.261900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0522 18:32:36.334219       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0522 18:32:36.334265       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0522 18:32:36.410692       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0522 18:32:36.410743       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0522 18:32:36.421607       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0522 18:32:36.421647       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0522 18:32:36.441747       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0522 18:32:36.441788       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0522 18:32:36.585580       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0522 18:32:36.585620       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0522 18:32:38.764525       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:55:19.470767       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0522 18:55:19.470986       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.852993    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75-xtables-lock\") pod \"kube-proxy-kqtgj\" (UID: \"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75\") " pod="kube-system/kube-proxy-kqtgj"
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.853021    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5d953629-c86b-47be-84da-baa3bdf24d2e-tmp\") pod \"storage-provisioner\" (UID: \"5d953629-c86b-47be-84da-baa3bdf24d2e\") " pod="kube-system/storage-provisioner"
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.853045    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e454b0cd-e618-4268-8882-69d2a4544917-lib-modules\") pod \"kindnet-qpfbl\" (UID: \"e454b0cd-e618-4268-8882-69d2a4544917\") " pod="kube-system/kindnet-qpfbl"
	May 22 18:55:40 multinode-737786 kubelet[1392]: I0522 18:55:40.853133    1392 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75-lib-modules\") pod \"kube-proxy-kqtgj\" (UID: \"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75\") " pod="kube-system/kube-proxy-kqtgj"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.263222    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f7044f4a3341c31c26a26c9a54148b5edf783501f39de034de125ea0756da88"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.269254    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1d360112edd5f1fefe695c76c60c4bcb6ff37c4ff1d3557141f077bc1d13ec"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.556834    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b52bbcc47a83fe266e6f891da30d8acaee28a3ce90bbbfa7209a66a33a7fc4"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.565710    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="635f4e9d5f8f1c8d7e841846d31b2e5cf268c887e750af271ef32caeb22d24a1"
	May 22 18:55:41 multinode-737786 kubelet[1392]: I0522 18:55:41.577338    1392 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6b2b3d758240c7c593442266ca02c7d49dce426e0b92147a72b5a13d59d90d0"
	May 22 18:55:43 multinode-737786 kubelet[1392]: I0522 18:55:43.640639    1392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 22 18:55:45 multinode-737786 kubelet[1392]: I0522 18:55:45.511732    1392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 22 18:55:46 multinode-737786 kubelet[1392]: E0522 18:55:46.826592    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:55:46 multinode-737786 kubelet[1392]: E0522 18:55:46.826655    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:55:56 multinode-737786 kubelet[1392]: E0522 18:55:56.845259    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:55:56 multinode-737786 kubelet[1392]: E0522 18:55:56.845308    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:56:06 multinode-737786 kubelet[1392]: E0522 18:56:06.868571    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:56:06 multinode-737786 kubelet[1392]: E0522 18:56:06.868605    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:56:11 multinode-737786 kubelet[1392]: I0522 18:56:11.854232    1392 scope.go:117] "RemoveContainer" containerID="16cb7c11afec8ec9106f148ae63dd8087aa03a7f81026fff036097da39aab0cb"
	May 22 18:56:11 multinode-737786 kubelet[1392]: I0522 18:56:11.854547    1392 scope.go:117] "RemoveContainer" containerID="11bb4599579bf5a23ef05bb4313bbd0b1ad6e971d79409ac99180f1970fef76b"
	May 22 18:56:11 multinode-737786 kubelet[1392]: E0522 18:56:11.854826    1392 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d953629-c86b-47be-84da-baa3bdf24d2e)\"" pod="kube-system/storage-provisioner" podUID="5d953629-c86b-47be-84da-baa3bdf24d2e"
	May 22 18:56:16 multinode-737786 kubelet[1392]: E0522 18:56:16.885020    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:56:16 multinode-737786 kubelet[1392]: E0522 18:56:16.885053    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 18:56:23 multinode-737786 kubelet[1392]: I0522 18:56:23.858356    1392 scope.go:117] "RemoveContainer" containerID="11bb4599579bf5a23ef05bb4313bbd0b1ad6e971d79409ac99180f1970fef76b"
	May 22 18:56:26 multinode-737786 kubelet[1392]: E0522 18:56:26.903258    1392 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:56:26 multinode-737786 kubelet[1392]: E0522 18:56:26.903337    1392 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	
	
	==> storage-provisioner [11bb4599579b] <==
	I0522 18:55:41.545271       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0522 18:56:11.548634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [2775772a4970] <==
	I0522 18:56:23.937856       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 18:56:23.945592       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 18:56:23.945661       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 18:56:41.339602       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 18:56:41.339666       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"1656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_bcc31734-cd22-4159-9a62-58c7b91d38ec became leader
	I0522 18:56:41.339738       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_bcc31734-cd22-4159-9a62-58c7b91d38ec!
	I0522 18:56:41.439960       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_bcc31734-cd22-4159-9a62-58c7b91d38ec!
	

                                                
                                                
-- /stdout --
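Annotation on the storage-provisioner logs above: the first container ([11bb4599579b]) died because it could not reach 10.96.0.1:443, the ClusterIP of the default "kubernetes" Service, within its 30-second startup window (init at 18:55:41, fatal at 18:56:11); the replacement container ([2775772a4970]) succeeded once that path was reachable. A minimal probe of the same VIP, assuming the profile name and address from the log above and that curl is available inside the kicbase image:

kubectl --context multinode-737786 get svc kubernetes   # CLUSTER-IP should be 10.96.0.1
out/minikube-linux-amd64 -p multinode-737786 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version
# An i/o timeout here usually means kube-proxy has not (re)programmed the
# Service rules yet, which fits a node that had just been restarted.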
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/DeleteNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  7m43s (x4 over 22m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  3m42s                default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiNode/serial/DeleteNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/DeleteNode (107.97s)
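The FailedScheduling events in the describe output above are consistent with the busybox Deployment requiring pod anti-affinity against its own replicas: with the cluster reduced to a single schedulable node, the second replica can never be placed. An illustrative manifest (a sketch, not the test's actual testdata) that reproduces the "didn't match pod anti-affinity rules" event on a one-node cluster:

kubectl --context multinode-737786 apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 2
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      affinity:
        podAntiAffinity:
          # require replicas to land on different hostnames; with only one
          # node, the scheduler reports "0/1 nodes are available" for pod 2
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: busybox
            topologyKey: kubernetes.io/hostname
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28
        command: ["sleep", "3600"]
EOF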

                                                
                                    
TestMultiNode/serial/StopMultiNode (12.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-737786 stop: (11.816870988s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status: exit status 7 (88.370425ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-737786-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-737786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr: exit status 7 (88.594342ms)

                                                
                                                
-- stdout --
	multinode-737786
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-737786-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
	multinode-737786-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:59:34.491852  199699 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:59:34.492118  199699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:34.492128  199699 out.go:304] Setting ErrFile to fd 2...
	I0522 18:59:34.492133  199699 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:34.492305  199699 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:59:34.492463  199699 out.go:298] Setting JSON to false
	I0522 18:59:34.492488  199699 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:59:34.492581  199699 notify.go:220] Checking for updates...
	I0522 18:59:34.492828  199699 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:34.492842  199699 status.go:255] checking status of multinode-737786 ...
	I0522 18:59:34.493206  199699 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:34.509712  199699 status.go:330] multinode-737786 host status = "Stopped" (err=<nil>)
	I0522 18:59:34.509751  199699 status.go:343] host is not running, skipping remaining checks
	I0522 18:59:34.509762  199699 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:59:34.509806  199699 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:59:34.510049  199699 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:34.525473  199699 status.go:330] multinode-737786-m02 host status = "Stopped" (err=<nil>)
	I0522 18:59:34.525493  199699 status.go:343] host is not running, skipping remaining checks
	I0522 18:59:34.525498  199699 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:59:34.525516  199699 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:59:34.525738  199699 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:59:34.541107  199699 status.go:330] multinode-737786-m03 host status = "Stopped" (err=<nil>)
	I0522 18:59:34.541125  199699 status.go:343] host is not running, skipping remaining checks
	I0522 18:59:34.541131  199699 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr": multinode-737786
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode-737786-m02
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode-737786-m03
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr": multinode-737786
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
multinode-737786-m02
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
multinode-737786-m03
type: Worker
host: Stopped
kubelet: Stopped
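Both count assertions above (multinode_test.go:364 and :368) appear to fail for the same underlying reason: the status output lists three stopped nodes, while the test presumably expects two because the preceding DeleteNode step should have removed multinode-737786-m03 and did not. A sketch of the equivalent manual check (the expected count of 2 is an assumption based on that test ordering):

out/minikube-linux-amd64 -p multinode-737786 status --alsologtostderr | grep -c 'host: Stopped'
# prints 3 for the output above; with m03 actually deleted it would print 2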

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 130,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:55:30.428700973Z",
	            "FinishedAt": "2024-05-22T18:59:34.127787675Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e5d9c4f018f85e131e1e3e35160c3be5874cc3e9e983a114ff800193704e1cf",
	            "SandboxKey": "/var/run/docker/netns/5e5d9c4f018f",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
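The inspect dump confirms the stop itself worked at the container level: State.Status is "exited" with ExitCode 130, finishing at 18:59:34 after minikube sent the configured StopSignal (SIGRTMIN+3). The same fields can be pulled directly with a Go-template format string instead of reading the full dump:

docker container inspect multinode-737786 \
  --format '{{.State.Status}} exit={{.State.ExitCode}} finished={{.State.FinishedAt}}'
# expected, per the dump above: exited exit=130 finished=2024-05-22T18:59:34.127787675Z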
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786: exit status 7 (59.62154ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "multinode-737786" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestMultiNode/serial/StopMultiNode (12.07s)
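For reference, the "exit status 7 (may be ok)" noted by the helpers is minikube's bitmask status code (described in the help text of "minikube status"): bit 1 = host not running, bit 2 = cluster not running, bit 4 = kubernetes not running, so 7 means all three are down, the expected state right after a stop. A sketch of decoding it in a wrapper script (profile name taken from this run):

out/minikube-linux-amd64 -p multinode-737786 status
code=$?
case $code in
  0) echo "everything running" ;;
  7) echo "fully stopped" ;;
  *) echo "partial or unknown state: $code" ;;
esac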

                                                
                                    
TestMultiNode/serial/RestartMultiNode (121.89s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-737786 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-737786 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: exit status 80 (2m0.117357405s)

                                                
                                                
-- stdout --
	* [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "multinode-737786" ...
	* Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: default-storageclass, storage-provisioner
	
	* Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	* Pulling base image v0.0.44-1715707529-18887 ...
	* Restarting existing docker container for "multinode-737786-m02" ...
	* Updating the running docker "multinode-737786-m02" container ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0522 18:59:34.657325  199775 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:59:34.657554  199775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:34.657562  199775 out.go:304] Setting ErrFile to fd 2...
	I0522 18:59:34.657566  199775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:34.657720  199775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:59:34.658203  199775 out.go:298] Setting JSON to false
	I0522 18:59:34.659121  199775 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6119,"bootTime":1716398256,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:59:34.659169  199775 start.go:139] virtualization: kvm guest
	I0522 18:59:34.661548  199775 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:59:34.663012  199775 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:59:34.664309  199775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:59:34.662986  199775 notify.go:220] Checking for updates...
	I0522 18:59:34.666671  199775 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:34.667892  199775 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:59:34.669173  199775 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:59:34.670352  199775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:59:34.671839  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:34.672242  199775 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:59:34.692956  199775 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:59:34.693064  199775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:59:34.736414  199775 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:59:34.727578342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerEr
rors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:59:34.736517  199775 docker.go:295] overlay module found
	I0522 18:59:34.739217  199775 out.go:177] * Using the docker driver based on existing profile
	I0522 18:59:34.740504  199775 start.go:297] selected driver: docker
	I0522 18:59:34.740520  199775 start.go:901] validating driver "docker" against &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:59:34.740614  199775 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:59:34.740679  199775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:59:34.783935  199775 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:59:34.77502677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_
64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErr
ors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:59:34.784481  199775 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:59:34.784544  199775 cni.go:84] Creating CNI manager for ""
	I0522 18:59:34.784556  199775 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:59:34.784592  199775 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-h
ost Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:59:34.787213  199775 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:59:34.788345  199775 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:59:34.789681  199775 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:59:34.790759  199775 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:59:34.790785  199775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:59:34.790797  199775 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:59:34.790805  199775 cache.go:56] Caching tarball of preloaded images
	I0522 18:59:34.790877  199775 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:59:34.790889  199775 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:59:34.790995  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:59:34.805492  199775 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:59:34.805534  199775 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:59:34.805553  199775 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:59:34.805602  199775 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:59:34.805679  199775 start.go:364] duration metric: took 53.78µs to acquireMachinesLock for "multinode-737786"
	I0522 18:59:34.805703  199775 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:59:34.805717  199775 fix.go:54] fixHost starting: 
	I0522 18:59:34.805959  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:34.821314  199775 fix.go:112] recreateIfNeeded on multinode-737786: state=Stopped err=<nil>
	W0522 18:59:34.821342  199775 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:59:34.823171  199775 out.go:177] * Restarting existing docker container for "multinode-737786" ...
	I0522 18:59:34.824579  199775 cli_runner.go:164] Run: docker start multinode-737786
	I0522 18:59:35.079040  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:35.096715  199775 kic.go:430] container "multinode-737786" state is running.
	I0522 18:59:35.097082  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:59:35.113368  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:59:35.113586  199775 machine.go:94] provisionDockerMachine start ...
	I0522 18:59:35.113653  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:35.130868  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:35.131109  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:35.131128  199775 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:59:35.131715  199775 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50388->127.0.0.1:32937: read: connection reset by peer
	I0522 18:59:38.242318  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:59:38.242343  199775 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:59:38.242404  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.258417  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.258580  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.258592  199775 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:59:38.380649  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:59:38.380723  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.396574  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.396746  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.396762  199775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:59:38.507150  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:59:38.507179  199775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:59:38.507193  199775 ubuntu.go:177] setting up certificates
	I0522 18:59:38.507220  199775 provision.go:84] configureAuth start
	I0522 18:59:38.507285  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:59:38.524413  199775 provision.go:143] copyHostCerts
	I0522 18:59:38.524446  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:59:38.524474  199775 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:59:38.524488  199775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:59:38.524565  199775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:59:38.524641  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:59:38.524659  199775 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:59:38.524663  199775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:59:38.524690  199775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:59:38.524730  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:59:38.524746  199775 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:59:38.524753  199775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:59:38.524780  199775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:59:38.524822  199775 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
	I0522 18:59:38.661121  199775 provision.go:177] copyRemoteCerts
	I0522 18:59:38.661175  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:59:38.661206  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.676916  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:38.759116  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:59:38.759181  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:59:38.779102  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:59:38.779146  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:59:38.799080  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:59:38.799127  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:59:38.819078  199775 provision.go:87] duration metric: took 311.841874ms to configureAuth
	I0522 18:59:38.819110  199775 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:59:38.819264  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:38.819329  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.835148  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.835384  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.835400  199775 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:59:38.947293  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:59:38.947317  199775 ubuntu.go:71] root file system type: overlay
	I0522 18:59:38.947414  199775 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:59:38.947480  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.963004  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.963177  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.963236  199775 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:59:39.085599  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:59:39.085656  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.101979  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:39.102166  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:39.102183  199775 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:59:39.219901  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
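The bare "ExecStart=" line in the unit above is the standard systemd idiom for overriding an inherited start command: it first clears the value so the second "ExecStart=" replaces, rather than appends to, the base unit's command. The "diff -u ... || { mv ...; restart; }" one-liner then rewrites and restarts Docker only when the rendered unit actually differs from what is on disk. A minimal sketch of the same two idioms (the override.conf path and the dockerd flag are illustrative, not the exact unit minikube manages):

	# drop-in override: clear the inherited command, then set the new one
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd --default-ulimit=nofile=1048576:1048576
	EOF
	# reload/enable/restart only when the rendered unit changed (same shape as the logged command)
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new \
	  || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; \
	       sudo systemctl daemon-reload && sudo systemctl restart docker; }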
	I0522 18:59:39.219926  199775 machine.go:97] duration metric: took 4.106321609s to provisionDockerMachine
	I0522 18:59:39.219936  199775 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:59:39.219951  199775 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:59:39.220008  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:59:39.220045  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.236332  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.319285  199775 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:59:39.322035  199775 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:59:39.322050  199775 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:59:39.322057  199775 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:59:39.322073  199775 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:59:39.322080  199775 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:59:39.322086  199775 command_runner.go:130] > ID=ubuntu
	I0522 18:59:39.322091  199775 command_runner.go:130] > ID_LIKE=debian
	I0522 18:59:39.322097  199775 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:59:39.322102  199775 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:59:39.322108  199775 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:59:39.322114  199775 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:59:39.322120  199775 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:59:39.322169  199775 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:59:39.322215  199775 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:59:39.322228  199775 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:59:39.322236  199775 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:59:39.322251  199775 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:59:39.322307  199775 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:59:39.322403  199775 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:59:39.322416  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:59:39.322524  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:59:39.329906  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:59:39.350136  199775 start.go:296] duration metric: took 130.188186ms for postStartSetup
	I0522 18:59:39.350206  199775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:59:39.350258  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.365900  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.447364  199775 command_runner.go:130] > 27%
	I0522 18:59:39.447616  199775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:59:39.451318  199775 command_runner.go:130] > 213G
	I0522 18:59:39.451499  199775 fix.go:56] duration metric: took 4.64578222s for fixHost
	I0522 18:59:39.451522  199775 start.go:83] releasing machines lock for "multinode-737786", held for 4.645827696s
	I0522 18:59:39.451586  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:59:39.466833  199775 ssh_runner.go:195] Run: cat /version.json
	I0522 18:59:39.466877  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.466958  199775 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:59:39.467022  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.483558  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.484707  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.629310  199775 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:59:39.629370  199775 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:59:39.629493  199775 ssh_runner.go:195] Run: systemctl --version
	I0522 18:59:39.633438  199775 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:59:39.633463  199775 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:59:39.633523  199775 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:59:39.637125  199775 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:59:39.637144  199775 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:59:39.637150  199775 command_runner.go:130] > Device: 37h/55d	Inode: 1307236     Links: 1
	I0522 18:59:39.637156  199775 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:59:39.637162  199775 command_runner.go:130] > Access: 2024-05-22 18:59:35.468422925 +0000
	I0522 18:59:39.637166  199775 command_runner.go:130] > Modify: 2024-05-22 18:55:34.983035774 +0000
	I0522 18:59:39.637171  199775 command_runner.go:130] > Change: 2024-05-22 18:55:34.983035774 +0000
	I0522 18:59:39.637176  199775 command_runner.go:130] >  Birth: 2024-05-22 18:55:34.983035774 +0000
	I0522 18:59:39.637344  199775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:59:39.652801  199775 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0522 18:59:39.652866  199775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:59:39.660640  199775 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
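The find/sed pipeline above injects a "name" key into any loopback CNI config that lacks one and pins its cniVersion to 1.0.0; the second find then parks any bridge/podman configs by renaming them to *.mk_disabled. The patched file itself is never printed, so the before/after below is an assumed sketch of a typical 200-loopback.conf, not output from this run:

	cat /etc/cni/net.d/200-loopback.conf
	# before (assumed): { "cniVersion": "0.3.1", "type": "loopback" }
	# after the patch:  { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }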
	I0522 18:59:39.660665  199775 start.go:494] detecting cgroup driver to use...
	I0522 18:59:39.660698  199775 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:59:39.660801  199775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:59:39.674202  199775 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:59:39.674271  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:59:39.682295  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:59:39.690505  199775 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:59:39.690543  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:59:39.698420  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:59:39.706192  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:59:39.713967  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:59:39.721725  199775 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:59:39.729168  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:59:39.736990  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:59:39.745046  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:59:39.752971  199775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:59:39.759103  199775 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:59:39.759772  199775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:59:39.766405  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:39.842699  199775 ssh_runner.go:195] Run: sudo systemctl restart containerd
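Taken together, the sed edits above converge /etc/containerd/config.toml on the cgroupfs driver and the CRI defaults minikube expects before containerd is restarted. One way to spot-check the result (key names are containerd CRI plugin options; the grep is illustrative, not part of the test):

	sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	# expected after patching:
	#   sandbox_image = "registry.k8s.io/pause:3.9"
	#   SystemdCgroup = false
	#   conf_dir = "/etc/cni/net.d"
	#   enable_unprivileged_ports = true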
	I0522 18:59:39.913692  199775 start.go:494] detecting cgroup driver to use...
	I0522 18:59:39.913745  199775 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:59:39.913793  199775 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:59:39.923110  199775 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:59:39.923130  199775 command_runner.go:130] > [Unit]
	I0522 18:59:39.923140  199775 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:59:39.923148  199775 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:59:39.923154  199775 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:59:39.923166  199775 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:59:39.923172  199775 command_runner.go:130] > Wants=network-online.target
	I0522 18:59:39.923182  199775 command_runner.go:130] > Requires=docker.socket
	I0522 18:59:39.923191  199775 command_runner.go:130] > StartLimitBurst=3
	I0522 18:59:39.923202  199775 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:59:39.923210  199775 command_runner.go:130] > [Service]
	I0522 18:59:39.923216  199775 command_runner.go:130] > Type=notify
	I0522 18:59:39.923226  199775 command_runner.go:130] > Restart=on-failure
	I0522 18:59:39.923238  199775 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:59:39.923254  199775 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:59:39.923283  199775 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:59:39.923300  199775 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:59:39.923311  199775 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:59:39.923324  199775 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:59:39.923340  199775 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:59:39.923358  199775 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:59:39.923373  199775 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:59:39.923382  199775 command_runner.go:130] > ExecStart=
	I0522 18:59:39.923403  199775 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:59:39.923415  199775 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:59:39.923427  199775 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:59:39.923440  199775 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:59:39.923450  199775 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:59:39.923460  199775 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:59:39.923468  199775 command_runner.go:130] > LimitCORE=infinity
	I0522 18:59:39.923479  199775 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:59:39.923492  199775 command_runner.go:130] > # Only systemd 226 and above support this option.
	I0522 18:59:39.923501  199775 command_runner.go:130] > TasksMax=infinity
	I0522 18:59:39.923510  199775 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:59:39.923520  199775 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:59:39.923529  199775 command_runner.go:130] > Delegate=yes
	I0522 18:59:39.923540  199775 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:59:39.923551  199775 command_runner.go:130] > KillMode=process
	I0522 18:59:39.923564  199775 command_runner.go:130] > [Install]
	I0522 18:59:39.923574  199775 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:59:39.924050  199775 cruntime.go:279] skipping containerd shutdown because we are bound to it
	I0522 18:59:39.924104  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:59:39.935210  199775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:59:39.950982  199775 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:59:39.951036  199775 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:59:39.953987  199775 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:59:39.954086  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:59:39.961541  199775 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:59:39.978646  199775 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:59:40.087121  199775 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:59:40.186524  199775 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:59:40.186634  199775 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:59:40.202646  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:40.287896  199775 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:59:40.586647  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:59:40.596511  199775 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:59:40.606653  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:59:40.615578  199775 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:59:40.689807  199775 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:59:40.760812  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:40.832348  199775 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:59:40.843985  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:59:40.853262  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:40.919086  199775 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:59:40.978856  199775 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:59:40.978933  199775 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:59:40.982421  199775 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:59:40.982447  199775 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:59:40.982457  199775 command_runner.go:130] > Device: 40h/64d	Inode: 218         Links: 1
	I0522 18:59:40.982466  199775 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:59:40.982479  199775 command_runner.go:130] > Access: 2024-05-22 18:59:40.924817423 +0000
	I0522 18:59:40.982488  199775 command_runner.go:130] > Modify: 2024-05-22 18:59:40.924817423 +0000
	I0522 18:59:40.982495  199775 command_runner.go:130] > Change: 2024-05-22 18:59:40.924817423 +0000
	I0522 18:59:40.982504  199775 command_runner.go:130] >  Birth: -
	I0522 18:59:40.982534  199775 start.go:562] Will wait 60s for crictl version
	I0522 18:59:40.982578  199775 ssh_runner.go:195] Run: which crictl
	I0522 18:59:40.985607  199775 command_runner.go:130] > /usr/bin/crictl
	I0522 18:59:40.985671  199775 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:59:41.013467  199775 command_runner.go:130] > Version:  0.1.0
	I0522 18:59:41.013490  199775 command_runner.go:130] > RuntimeName:  docker
	I0522 18:59:41.013494  199775 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:59:41.013499  199775 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:59:41.015520  199775 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
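The crictl call above resolves its endpoint from the /etc/crictl.yaml written a moment earlier (runtime-endpoint: unix:///var/run/cri-dockerd.sock); the same check can be made explicit with crictl's standard flag, e.g.:

	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version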
	I0522 18:59:41.015567  199775 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:59:41.036936  199775 command_runner.go:130] > 26.1.2
	I0522 18:59:41.037003  199775 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:59:41.057023  199775 command_runner.go:130] > 26.1.2
	I0522 18:59:41.060284  199775 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:59:41.060360  199775 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:59:41.075514  199775 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:59:41.078871  199775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
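The one-liner above is the usual pattern for editing /etc/hosts from an unprivileged shell: drop any stale entry, append the fresh one into a temp file, then copy it back with sudo (a plain sudo redirect would fail, since the redirection is performed by the unprivileged shell). Unpacked for readability:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo $'192.168.67.1\thost.minikube.internal'; } > /tmp/h.$$ \
	  && sudo cp /tmp/h.$$ /etc/hosts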
	I0522 18:59:41.088506  199775 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:59:41.088614  199775 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:59:41.088651  199775 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:59:41.104540  199775 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:59:41.104560  199775 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:59:41.104567  199775 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:59:41.104574  199775 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:59:41.104580  199775 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:59:41.104588  199775 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:59:41.104597  199775 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:59:41.104605  199775 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:59:41.104618  199775 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:59:41.104632  199775 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:59:41.105493  199775 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:59:41.105511  199775 docker.go:615] Images already preloaded, skipping extraction
	I0522 18:59:41.105571  199775 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:59:41.121068  199775 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:59:41.121088  199775 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:59:41.121095  199775 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:59:41.121103  199775 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:59:41.121109  199775 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:59:41.121118  199775 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:59:41.121136  199775 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:59:41.121147  199775 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:59:41.121160  199775 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:59:41.121171  199775 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:59:41.122034  199775 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:59:41.122058  199775 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:59:41.122068  199775 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:59:41.122191  199775 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:59:41.122249  199775 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:59:41.164777  199775 command_runner.go:130] > cgroupfs
	I0522 18:59:41.166168  199775 cni.go:84] Creating CNI manager for ""
	I0522 18:59:41.166182  199775 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:59:41.166202  199775 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:59:41.166231  199775 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:59:41.166360  199775 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
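The rendered config is copied to /var/tmp/minikube/kubeadm.yaml.new below and only swapped in if it differs from the one already on the node. As an aside, kubeadm v1.30 can sanity-check such a file itself; this is an illustrative command, not one the test runs:

	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new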
	
	I0522 18:59:41.166412  199775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:59:41.173501  199775 command_runner.go:130] > kubeadm
	I0522 18:59:41.173516  199775 command_runner.go:130] > kubectl
	I0522 18:59:41.173521  199775 command_runner.go:130] > kubelet
	I0522 18:59:41.174225  199775 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:59:41.174277  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:59:41.181641  199775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:59:41.196590  199775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:59:41.211515  199775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0522 18:59:41.226228  199775 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:59:41.229089  199775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:59:41.238030  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:41.308903  199775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:59:41.320466  199775 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:59:41.320483  199775 certs.go:194] generating shared ca certs ...
	I0522 18:59:41.320502  199775 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:41.320646  199775 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:59:41.320698  199775 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:59:41.320711  199775 certs.go:256] generating profile certs ...
	I0522 18:59:41.320806  199775 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:59:41.320870  199775 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:59:41.320924  199775 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:59:41.320936  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:59:41.320952  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:59:41.320973  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:59:41.320987  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:59:41.321000  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:59:41.321014  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:59:41.321029  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:59:41.321047  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:59:41.321101  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:59:41.321137  199775 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:59:41.321150  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:59:41.321182  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:59:41.321210  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:59:41.321267  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:59:41.321326  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:59:41.321362  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.321379  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.321399  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.322191  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:59:41.343282  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:59:41.364358  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:59:41.389503  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:59:41.466649  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:59:41.548665  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:59:41.576097  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:59:41.663618  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:59:41.686294  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:59:41.708614  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:59:41.755450  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:59:41.777856  199775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:59:41.792889  199775 ssh_runner.go:195] Run: openssl version
	I0522 18:59:41.797633  199775 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:59:41.797704  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:59:41.805732  199775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.808853  199775 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.808893  199775 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.808935  199775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.814721  199775 command_runner.go:130] > 3ec20f2e
	I0522 18:59:41.814945  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:59:41.822355  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:59:41.830234  199775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.833113  199775 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.833152  199775 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.833193  199775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.838909  199775 command_runner.go:130] > b5213941
	I0522 18:59:41.838964  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:59:41.846130  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:59:41.854027  199775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.856957  199775 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.856978  199775 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.857008  199775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.863854  199775 command_runner.go:130] > 51391683
	I0522 18:59:41.864155  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
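The three link steps above follow OpenSSL's subject-hash convention: a CA in /etc/ssl/certs is found via a symlink named <hash>.0, where <hash> is the output of openssl x509 -hash. Condensed into one sketch (the same commands the log runs, just chained):

	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"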
	I0522 18:59:41.872963  199775 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:59:41.876441  199775 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:59:41.876466  199775 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0522 18:59:41.876476  199775 command_runner.go:130] > Device: 801h/2049d	Inode: 1307017     Links: 1
	I0522 18:59:41.876485  199775 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:59:41.876495  199775 command_runner.go:130] > Access: 2024-05-22 18:55:37.083187616 +0000
	I0522 18:59:41.876502  199775 command_runner.go:130] > Modify: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:59:41.876513  199775 command_runner.go:130] > Change: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:59:41.876522  199775 command_runner.go:130] >  Birth: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:59:41.876589  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:59:41.884263  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.884568  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:59:41.891332  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.891516  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:59:41.897217  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.897376  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:59:41.903657  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.903908  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:59:41.909914  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.910175  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0522 18:59:41.917460  199775 command_runner.go:130] > Certificate will not expire
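Each "Certificate will not expire" line is openssl's literal success output for -checkend N, which exits non-zero if the certificate would expire within N seconds; 86400 gives the control-plane certs a 24-hour safety margin. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	# prints "Certificate will not expire" and exits 0 while >24h of validity remain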
	I0522 18:59:41.917518  199775 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:59:41.917656  199775 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:59:41.960301  199775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:59:41.969598  199775 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0522 18:59:41.969631  199775 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0522 18:59:41.969641  199775 command_runner.go:130] > /var/lib/minikube/etcd:
	I0522 18:59:41.969648  199775 command_runner.go:130] > member
	W0522 18:59:41.970788  199775 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:59:41.970807  199775 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:59:41.970813  199775 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:59:41.970855  199775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:59:41.982721  199775 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:59:41.983078  199775 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-737786" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:41.983181  199775 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-737786" cluster setting kubeconfig missing "multinode-737786" context setting]
	I0522 18:59:41.983550  199775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:41.984046  199775 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:41.984249  199775 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:59:41.984661  199775 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:59:41.984900  199775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:59:42.054179  199775 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.67.2
	I0522 18:59:42.054210  199775 kubeadm.go:591] duration metric: took 83.392356ms to restartPrimaryControlPlane
	I0522 18:59:42.054219  199775 kubeadm.go:393] duration metric: took 136.705239ms to StartCluster
	I0522 18:59:42.054237  199775 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:42.054314  199775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:42.054846  199775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:42.055084  199775 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:59:42.057350  199775 out.go:177] * Verifying Kubernetes components...
	I0522 18:59:42.055335  199775 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:59:42.055409  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:42.057426  199775 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:59:42.057464  199775 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	W0522 18:59:42.058679  199775 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:59:42.057469  199775 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:59:42.058657  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:42.058798  199775 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:59:42.058805  199775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:59:42.059102  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:42.059303  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:42.084954  199775 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:59:42.083978  199775 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:42.085298  199775 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:59:42.086409  199775 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:59:42.086425  199775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:59:42.086473  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:42.086612  199775 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	W0522 18:59:42.086623  199775 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:59:42.086647  199775 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:59:42.087068  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:42.108554  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:42.108724  199775 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:59:42.108740  199775 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:59:42.108801  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:42.124067  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:42.349892  199775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:59:42.365487  199775 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:59:42.365636  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:42.365650  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:42.365661  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:42.365667  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:42.365926  199775 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:59:42.365955  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:42.368759  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:59:42.445191  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:59:42.747644  199775 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:59:42.747689  199775 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:59:42.747716  199775 retry.go:31] will retry after 271.82269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:59:42.747779  199775 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:59:42.747804  199775 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:59:42.747817  199775 retry.go:31] will retry after 351.337067ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
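
Both applies fail because the apiserver is not reachable yet, and retry.go schedules randomized re-runs ("will retry after 271.82269ms", "will retry after 351.337067ms"). A hedged sketch of that retry-with-backoff pattern (the command, attempt count, and backoff schedule are illustrative, not minikube's exact implementation):

    package main

    import (
    	"fmt"
    	"math/rand"
    	"os/exec"
    	"time"
    )

    // retryApply re-runs an apply until it succeeds, sleeping a randomized,
    // growing interval between attempts -- the pattern behind the
    // "will retry after ..." lines above.
    func retryApply(manifest string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		out, e := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
    		if e == nil {
    			return nil
    		}
    		err = fmt.Errorf("%v: %s", e, out)
    		// Randomized base interval, doubled each attempt (hypothetical schedule).
    		backoff := time.Duration(rand.Int63n(int64(500*time.Millisecond))) << i
    		fmt.Printf("apply failed, will retry after %v: %v\n", backoff, err)
    		time.Sleep(backoff)
    	}
    	return err
    }

    func main() {
    	if err := retryApply("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
    		fmt.Println("giving up:", err)
    	}
    }
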
	I0522 18:59:42.865881  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:42.865907  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:42.865918  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:42.865925  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:43.020621  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:59:43.099952  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:59:44.953522  199775 round_trippers.go:574] Response Status: 200 OK in 2087 milliseconds
	I0522 18:59:44.953553  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:44.953563  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:44.953568  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:44.953573  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:44.953577  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:44.953582  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:44 GMT
	I0522 18:59:44.953585  199775 round_trippers.go:580]     Audit-Id: 00349735-99e2-451d-a0ec-1bc8cad4692e
	I0522 18:59:44.954791  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:44.955615  199775 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:59:44.955635  199775 node_ready.go:38] duration metric: took 2.590096009s for node "multinode-737786" to be "Ready" ...
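
The node_ready wait above is a poll of GET /api/v1/nodes/<name> until the Ready condition reports "True". A client-go sketch of the same check, using the kubeconfig path from this log; not minikube's actual helper:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node object until its Ready condition reports
    // "True", mirroring the node_ready.go wait above (6m0s in the log).
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, timeout, true,
    		func(ctx context.Context) (bool, error) {
    			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    			if err != nil {
    				return false, nil // keep polling through transient API errors
    			}
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady {
    					return c.Status == corev1.ConditionTrue, nil
    				}
    			}
    			return false, nil
    		})
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-9771/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "multinode-737786", 6*time.Minute))
    }
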
	I0522 18:59:44.955648  199775 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:59:44.955720  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:59:44.955726  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:44.955736  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:44.955742  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:44.962955  199775 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0522 18:59:44.962991  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:44.963001  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:44.963009  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:44 GMT
	I0522 18:59:44.963013  199775 round_trippers.go:580]     Audit-Id: 95ec849a-2081-4480-8152-bae6335ebbe1
	I0522 18:59:44.963017  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:44.963022  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:44.963026  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:44.963637  199775 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1783"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1637","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 59082 chars]
	I0522 18:59:44.968126  199775 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:44.968231  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:59:44.968241  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:44.968251  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:44.968260  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:44.969901  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:44.969916  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:44.969925  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:44.969930  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:44.969936  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:44 GMT
	I0522 18:59:44.969940  199775 round_trippers.go:580]     Audit-Id: 790ee2f6-bbfb-4442-ad45-71a07163d279
	I0522 18:59:44.969943  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:44.969947  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:44.970163  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1637","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6448 chars]
	I0522 18:59:44.970661  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:44.970679  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:44.970689  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:44.970694  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.048873  199775 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I0522 18:59:45.048896  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.048925  199775 round_trippers.go:580]     Audit-Id: ff89e442-6b26-4bf7-b902-4b2e2a86a546
	I0522 18:59:45.048930  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.048934  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.048938  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:45.048943  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:45.048948  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.049638  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.050090  199775 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.050135  199775 pod_ready.go:81] duration metric: took 81.977298ms for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.050156  199775 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.050240  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:59:45.050253  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.050263  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.050268  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.054731  199775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:59:45.054756  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.054781  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.054788  199775 round_trippers.go:580]     Audit-Id: 44b32557-8db3-40f9-9a12-a58962945a26
	I0522 18:59:45.054797  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.054804  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.054816  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.054826  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.055152  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"1612","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0522 18:59:45.055827  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.055873  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.055885  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.055897  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.057594  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.057613  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.057622  199775 round_trippers.go:580]     Audit-Id: 66d081a8-8f2f-4907-a6ce-8ffec0da4bff
	I0522 18:59:45.057627  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.057649  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.057657  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.057660  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.057665  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.057774  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.058153  199775 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.058177  199775 pod_ready.go:81] duration metric: took 8.009151ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.058193  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.058281  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:59:45.058292  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.058301  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.058305  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.060506  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:45.060537  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.060545  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.060551  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.060555  199775 round_trippers.go:580]     Audit-Id: 872a911f-72f0-4610-9933-ab1beea1fab1
	I0522 18:59:45.060561  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.060564  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.060569  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.060744  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"1621","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8740 chars]
	I0522 18:59:45.061219  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.061244  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.061267  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.061276  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.062712  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.062729  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.062739  199775 round_trippers.go:580]     Audit-Id: 38b34a7c-a076-49e9-8c7c-66c4e27de82c
	I0522 18:59:45.062745  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.062749  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.062755  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.062760  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.062764  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.062913  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.063163  199775 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.063175  199775 pod_ready.go:81] duration metric: took 4.975505ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.063184  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.063226  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:59:45.063234  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.063239  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.063243  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.064913  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.064928  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.064937  199775 round_trippers.go:580]     Audit-Id: 78545d9c-25a0-4cd2-b7d2-dd3dc3bf4092
	I0522 18:59:45.064943  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.064947  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.064951  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.064965  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.064970  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.065121  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"1617","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8313 chars]
	I0522 18:59:45.065638  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.065652  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.065662  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.065670  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.066993  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.067008  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.067016  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.067021  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.067026  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.067030  199775 round_trippers.go:580]     Audit-Id: 43a0c948-088c-41ca-afbb-eaa3b36def3b
	I0522 18:59:45.067034  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.067045  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.067153  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.067518  199775 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.067533  199775 pod_ready.go:81] duration metric: took 4.342707ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.067541  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.067580  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:59:45.067584  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.067591  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.067594  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.069029  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.069042  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.069048  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.069051  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.069054  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.069056  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.069064  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.069068  199775 round_trippers.go:580]     Audit-Id: eefd608b-03a6-4bf1-b6c7-a2cbe257b0a1
	I0522 18:59:45.069245  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"1607","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0522 18:59:45.150867  199775 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0522 18:59:45.150908  199775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.130257791s)
	I0522 18:59:45.151029  199775 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:59:45.151039  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.151051  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.151056  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.152654  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.152684  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.152692  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.152697  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.152701  199775 round_trippers.go:580]     Content-Length: 1274
	I0522 18:59:45.152705  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.152709  199775 round_trippers.go:580]     Audit-Id: 2402c5d5-4b0d-46a9-a5dd-acb12a0778a3
	I0522 18:59:45.152716  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.152720  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.152750  199775 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1787"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0522 18:59:45.153265  199775 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:59:45.153319  199775 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:59:45.153347  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.153361  199775 round_trippers.go:473]     Content-Type: application/json
	I0522 18:59:45.153369  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.153373  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.155795  199775 request.go:629] Waited for 86.18921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.155861  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.155871  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.155883  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.155891  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.156116  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:45.156135  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.156145  199775 round_trippers.go:580]     Audit-Id: ab1b87fb-f688-48fd-ac50-a321eefa04e3
	I0522 18:59:45.156152  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.156156  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.156160  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.156164  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.156169  199775 round_trippers.go:580]     Content-Length: 1220
	I0522 18:59:45.156174  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.156202  199775 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
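
The GET followed by a PUT on /apis/storage.k8s.io/v1/storageclasses/standard above is a read-modify-write that re-asserts the storageclass.kubernetes.io/is-default-class annotation. A client-go sketch of that step (kubeconfig path taken from this log; the function itself is illustrative, not minikube's actual code):

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // setDefaultStorageClass fetches the StorageClass and updates it with the
    // default-class annotation -- the same GET/PUT pair the log shows.
    func setDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset, name string) error {
    	sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return err
    	}
    	if sc.Annotations == nil {
    		sc.Annotations = map[string]string{}
    	}
    	sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
    	_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
    	return err
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-9771/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(setDefaultStorageClass(context.Background(), cs, "standard"))
    }
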
	I0522 18:59:45.157451  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.157473  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.157482  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.157488  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.157494  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.157499  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.157503  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.157507  199775 round_trippers.go:580]     Audit-Id: a51128b5-04d3-4748-abea-ec42a0969a69
	I0522 18:59:45.157620  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.158103  199775 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.158129  199775 pod_ready.go:81] duration metric: took 90.579489ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.158142  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.323712  199775 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0522 18:59:45.337452  199775 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0522 18:59:45.353065  199775 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:59:45.356261  199775 request.go:629] Waited for 198.062578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:59:45.356335  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:59:45.356345  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.356354  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.356360  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.358209  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.358229  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.358236  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.358240  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.358243  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.358245  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.358248  199775 round_trippers.go:580]     Audit-Id: 42473482-cac4-43d8-add8-01891a6ba4a3
	I0522 18:59:45.358251  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.358428  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"1614","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0522 18:59:45.366817  199775 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:59:45.433333  199775 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0522 18:59:45.547435  199775 command_runner.go:130] > pod/storage-provisioner configured
	I0522 18:59:45.552147  199775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.452155221s)
	I0522 18:59:45.554994  199775 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:59:45.556295  199775 addons.go:505] duration metric: took 3.500954665s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0522 18:59:45.556235  199775 request.go:629] Waited for 197.339011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.556431  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.556456  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.556475  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.556487  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.562601  199775 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0522 18:59:45.562625  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.562634  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.562640  199775 round_trippers.go:580]     Audit-Id: 9b368b52-aa14-4c21-9228-188cb027a4b9
	I0522 18:59:45.562644  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.562648  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.562653  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.562656  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.562850  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1788","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.563264  199775 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.563305  199775 pod_ready.go:81] duration metric: took 405.154046ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.563318  199775 pod_ready.go:38] duration metric: took 607.658988ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
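
Each of the pod_ready waits above performs the same per-pod test: fetch the pod from kube-system and inspect its Ready condition. A hedged client-go sketch of that check (pod name and kubeconfig path taken from the log; the helper itself is illustrative, not minikube's):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether a pod's Ready condition is "True" -- the
    // test behind each repeated pod_ready.go wait above.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
    	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18943-9771/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(config)
    	if err != nil {
    		panic(err)
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()
    	fmt.Println(isPodReady(ctx, cs, "kube-system", "kube-scheduler-multinode-737786"))
    }
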
	I0522 18:59:45.563342  199775 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:59:45.563411  199775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:59:45.577417  199775 command_runner.go:130] > 2003
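
The process wait above is a single pgrep over SSH; "2003" is the matched kube-apiserver PID. A local sketch of the same probe (minikube runs it inside the node via ssh_runner and sudo; this is plain os/exec for illustration):

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// -x exact match, -n newest, -f match the full command line.
    	// pgrep exits non-zero when nothing matches, which keeps the wait looping.
    	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		log.Fatalf("kube-apiserver not running yet: %v", err)
    	}
    	fmt.Println("apiserver pid:", strings.TrimSpace(string(out))) // e.g. "2003" in the log
    }
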
	I0522 18:59:45.577457  199775 api_server.go:72] duration metric: took 3.522320494s to wait for apiserver process to appear ...
	I0522 18:59:45.577467  199775 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:59:45.577483  199775 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:59:45.580888  199775 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0522 18:59:45.580921  199775 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0522 18:59:46.078506  199775 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:59:46.144573  199775 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0522 18:59:46.144658  199775 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0522 18:59:46.577749  199775 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:59:46.581996  199775 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:59:46.582072  199775 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:59:46.582080  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.582087  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.582091  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.582847  199775 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:59:46.582863  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.582870  199775 round_trippers.go:580]     Audit-Id: 53752c00-8dc0-4bfb-8fe1-45ec40c3d2fb
	I0522 18:59:46.582874  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.582889  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.582897  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.582900  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.582904  199775 round_trippers.go:580]     Content-Length: 263
	I0522 18:59:46.582906  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.582920  199775 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:59:46.583003  199775 api_server.go:141] control plane version: v1.30.1
	I0522 18:59:46.583025  199775 api_server.go:131] duration metric: took 1.005552775s to wait for apiserver health ...
	I0522 18:59:46.583040  199775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:59:46.583094  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:59:46.583101  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.583118  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.583137  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.652541  199775 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0522 18:59:46.652564  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.652571  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.652575  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.652578  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.652581  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.652621  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.652627  199775 round_trippers.go:580]     Audit-Id: ee158221-6aef-4f5f-9f4c-d04d9cd99a78
	I0522 18:59:46.654310  199775 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1793","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60493 chars]
	I0522 18:59:46.656705  199775 system_pods.go:59] 8 kube-system pods found
	I0522 18:59:46.656775  199775 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:59:46.656799  199775 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:59:46.656812  199775 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:59:46.656826  199775 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:59:46.656839  199775 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:59:46.656853  199775 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:59:46.656864  199775 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:59:46.656879  199775 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:59:46.656887  199775 system_pods.go:74] duration metric: took 73.839475ms to wait for pod list to return data ...
	I0522 18:59:46.656909  199775 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:59:46.657013  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:59:46.657031  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.657041  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.657052  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.659079  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:46.659105  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.659116  199775 round_trippers.go:580]     Audit-Id: 9e4ded67-5094-4348-a2b4-ec3e2afc8e53
	I0522 18:59:46.659123  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.659132  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.659136  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.659148  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.659152  199775 round_trippers.go:580]     Content-Length: 262
	I0522 18:59:46.659167  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.659187  199775 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:59:46.659421  199775 default_sa.go:45] found service account: "default"
	I0522 18:59:46.659446  199775 default_sa.go:55] duration metric: took 2.524633ms for default service account to be created ...
	I0522 18:59:46.659457  199775 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:59:46.659517  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:59:46.659532  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.659542  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.659548  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.662463  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:46.662480  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.662487  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.662491  199775 round_trippers.go:580]     Audit-Id: 80886460-303e-4011-adae-70833530b5b7
	I0522 18:59:46.662494  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.662497  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.662500  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.662503  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.663455  199775 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1793","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60493 chars]
	I0522 18:59:46.665338  199775 system_pods.go:86] 8 kube-system pods found
	I0522 18:59:46.665361  199775 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:59:46.665368  199775 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:59:46.665375  199775 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:59:46.665392  199775 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:59:46.665398  199775 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:59:46.665414  199775 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:59:46.665425  199775 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:59:46.665433  199775 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:59:46.665440  199775 system_pods.go:126] duration metric: took 5.977627ms to wait for k8s-apps to be running ...
	I0522 18:59:46.665446  199775 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:59:46.665481  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:59:46.680943  199775 system_svc.go:56] duration metric: took 15.466416ms WaitForService to wait for kubelet
	I0522 18:59:46.681027  199775 kubeadm.go:576] duration metric: took 4.625886917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:59:46.681071  199775 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:59:46.681210  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:59:46.681238  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.681269  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.681284  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.684896  199775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:59:46.684982  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.685004  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.685037  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.685062  199775 round_trippers.go:580]     Audit-Id: 5b29536d-555e-447a-b5de-22d0f90e97ba
	I0522 18:59:46.685075  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.685087  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.685098  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.685273  199775 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1860"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1788","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 5264 chars]
	I0522 18:59:46.685771  199775 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:59:46.685837  199775 node_conditions.go:123] node cpu capacity is 8
	I0522 18:59:46.685862  199775 node_conditions.go:105] duration metric: took 4.759995ms to run NodePressure ...
	I0522 18:59:46.685884  199775 start.go:240] waiting for startup goroutines ...
	I0522 18:59:46.685918  199775 start.go:245] waiting for cluster config update ...
	I0522 18:59:46.685946  199775 start.go:254] writing updated cluster config ...
	I0522 18:59:46.689398  199775 out.go:177] 
	I0522 18:59:46.690789  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:46.690929  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:59:46.692402  199775 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:59:46.693438  199775 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:59:46.694568  199775 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:59:46.695615  199775 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:59:46.695636  199775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:59:46.695648  199775 cache.go:56] Caching tarball of preloaded images
	I0522 18:59:46.695725  199775 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:59:46.695732  199775 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:59:46.695807  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:59:46.710779  199775 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:59:46.710810  199775 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:59:46.710821  199775 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:59:46.710843  199775 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:59:46.710893  199775 start.go:364] duration metric: took 34.69µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:59:46.710909  199775 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:59:46.710913  199775 fix.go:54] fixHost starting: m02
	I0522 18:59:46.711144  199775 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:46.726425  199775 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Stopped err=<nil>
	W0522 18:59:46.726450  199775 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:59:46.728022  199775 out.go:177] * Restarting existing docker container for "multinode-737786-m02" ...
	I0522 18:59:46.729121  199775 cli_runner.go:164] Run: docker start multinode-737786-m02
	I0522 18:59:47.097738  199775 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:47.114644  199775 kic.go:430] container "multinode-737786-m02" state is running.
	I0522 18:59:47.114997  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:47.131986  199775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:59:47.132026  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:47.148365  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32942 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	W0522 18:59:47.149133  199775 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:32804->127.0.0.1:32942: read: connection reset by peer
	I0522 18:59:47.149165  199775 retry.go:31] will retry after 266.09716ms: ssh: handshake failed: read tcp 127.0.0.1:32804->127.0.0.1:32942: read: connection reset by peer
	W0522 18:59:47.415745  199775 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:32818->127.0.0.1:32942: read: connection reset by peer
	I0522 18:59:47.415789  199775 retry.go:31] will retry after 300.769199ms: ssh: handshake failed: read tcp 127.0.0.1:32818->127.0.0.1:32942: read: connection reset by peer
	I0522 18:59:47.799305  199775 command_runner.go:130] > 27%
	I0522 18:59:47.799558  199775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:59:47.803128  199775 command_runner.go:130] > 213G
	I0522 18:59:47.803504  199775 fix.go:56] duration metric: took 1.092584564s for fixHost
	I0522 18:59:47.803525  199775 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1.09262176s
	W0522 18:59:47.803543  199775 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:59:47.803607  199775 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:59:47.803619  199775 start.go:728] Will try again in 5 seconds ...
	I0522 18:59:52.804451  199775 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:59:52.804590  199775 start.go:364] duration metric: took 107.122µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:59:52.804617  199775 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:59:52.804626  199775 fix.go:54] fixHost starting: m02
	I0522 18:59:52.804865  199775 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:52.821071  199775 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Running err=<nil>
	W0522 18:59:52.821099  199775 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:59:52.823752  199775 out.go:177] * Updating the running docker "multinode-737786-m02" container ...
	I0522 18:59:52.824906  199775 machine.go:94] provisionDockerMachine start ...
	I0522 18:59:52.824977  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:52.840560  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:52.840715  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32942 <nil> <nil>}
	I0522 18:59:52.840727  199775 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:59:52.954372  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:59:52.954401  199775 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:59:52.954462  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:52.970241  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:52.970424  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32942 <nil> <nil>}
	I0522 18:59:52.970443  199775 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:59:53.093444  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:59:53.093511  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:53.109566  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:53.109727  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32942 <nil> <nil>}
	I0522 18:59:53.109744  199775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:59:53.227092  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:59:53.227117  199775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:59:53.227130  199775 ubuntu.go:177] setting up certificates
	I0522 18:59:53.227142  199775 provision.go:84] configureAuth start
	I0522 18:59:53.227193  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.242688  199775 provision.go:87] duration metric: took 15.538812ms to configureAuth
	W0522 18:59:53.242709  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.242725  199775 retry.go:31] will retry after 79.472µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.243838  199775 provision.go:84] configureAuth start
	I0522 18:59:53.243893  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.259086  199775 provision.go:87] duration metric: took 15.229784ms to configureAuth
	W0522 18:59:53.259105  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.259122  199775 retry.go:31] will retry after 200.248µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.260190  199775 provision.go:84] configureAuth start
	I0522 18:59:53.260251  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.276300  199775 provision.go:87] duration metric: took 16.090219ms to configureAuth
	W0522 18:59:53.276322  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.276342  199775 retry.go:31] will retry after 158.856µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.277452  199775 provision.go:84] configureAuth start
	I0522 18:59:53.277509  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.292719  199775 provision.go:87] duration metric: took 15.24981ms to configureAuth
	W0522 18:59:53.292736  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.292752  199775 retry.go:31] will retry after 210.436µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.293867  199775 provision.go:84] configureAuth start
	I0522 18:59:53.293940  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.309101  199775 provision.go:87] duration metric: took 15.216271ms to configureAuth
	W0522 18:59:53.309122  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.309140  199775 retry.go:31] will retry after 296.144µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.310265  199775 provision.go:84] configureAuth start
	I0522 18:59:53.310331  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.325880  199775 provision.go:87] duration metric: took 15.595921ms to configureAuth
	W0522 18:59:53.325898  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.325912  199775 retry.go:31] will retry after 869.601µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.327034  199775 provision.go:84] configureAuth start
	I0522 18:59:53.327090  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.342107  199775 provision.go:87] duration metric: took 15.054569ms to configureAuth
	W0522 18:59:53.342125  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.342141  199775 retry.go:31] will retry after 1.679631ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.344324  199775 provision.go:84] configureAuth start
	I0522 18:59:53.344379  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.359510  199775 provision.go:87] duration metric: took 15.166039ms to configureAuth
	W0522 18:59:53.359530  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.359547  199775 retry.go:31] will retry after 2.000659ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.361724  199775 provision.go:84] configureAuth start
	I0522 18:59:53.361793  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.377321  199775 provision.go:87] duration metric: took 15.58099ms to configureAuth
	W0522 18:59:53.377337  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.377352  199775 retry.go:31] will retry after 2.840474ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.380526  199775 provision.go:84] configureAuth start
	I0522 18:59:53.380577  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.395455  199775 provision.go:87] duration metric: took 14.913193ms to configureAuth
	W0522 18:59:53.395487  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.395505  199775 retry.go:31] will retry after 2.345207ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.398696  199775 provision.go:84] configureAuth start
	I0522 18:59:53.398749  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.414009  199775 provision.go:87] duration metric: took 15.296264ms to configureAuth
	W0522 18:59:53.414027  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.414043  199775 retry.go:31] will retry after 6.930668ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.421219  199775 provision.go:84] configureAuth start
	I0522 18:59:53.421272  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.436178  199775 provision.go:87] duration metric: took 14.942398ms to configureAuth
	W0522 18:59:53.436192  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.436207  199775 retry.go:31] will retry after 10.301689ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.447397  199775 provision.go:84] configureAuth start
	I0522 18:59:53.447478  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.462197  199775 provision.go:87] duration metric: took 14.782569ms to configureAuth
	W0522 18:59:53.462213  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.462228  199775 retry.go:31] will retry after 17.860239ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.480384  199775 provision.go:84] configureAuth start
	I0522 18:59:53.480465  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.495396  199775 provision.go:87] duration metric: took 14.991137ms to configureAuth
	W0522 18:59:53.495412  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.495433  199775 retry.go:31] will retry after 20.664829ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.516621  199775 provision.go:84] configureAuth start
	I0522 18:59:53.516682  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.531845  199775 provision.go:87] duration metric: took 15.208135ms to configureAuth
	W0522 18:59:53.531863  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.531883  199775 retry.go:31] will retry after 43.708085ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.576086  199775 provision.go:84] configureAuth start
	I0522 18:59:53.576179  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.591863  199775 provision.go:87] duration metric: took 15.747177ms to configureAuth
	W0522 18:59:53.591880  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.591897  199775 retry.go:31] will retry after 58.013612ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.650117  199775 provision.go:84] configureAuth start
	I0522 18:59:53.650196  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.666163  199775 provision.go:87] duration metric: took 16.024136ms to configureAuth
	W0522 18:59:53.666179  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.666195  199775 retry.go:31] will retry after 59.150172ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.726406  199775 provision.go:84] configureAuth start
	I0522 18:59:53.726511  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.742421  199775 provision.go:87] duration metric: took 15.990636ms to configureAuth
	W0522 18:59:53.742440  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.742457  199775 retry.go:31] will retry after 79.255542ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.822691  199775 provision.go:84] configureAuth start
	I0522 18:59:53.822764  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.838971  199775 provision.go:87] duration metric: took 16.257557ms to configureAuth
	W0522 18:59:53.838991  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.839007  199775 retry.go:31] will retry after 161.972905ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.001318  199775 provision.go:84] configureAuth start
	I0522 18:59:54.001419  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:54.018018  199775 provision.go:87] duration metric: took 16.659513ms to configureAuth
	W0522 18:59:54.018037  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.018056  199775 retry.go:31] will retry after 120.204263ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.139317  199775 provision.go:84] configureAuth start
	I0522 18:59:54.139416  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:54.156237  199775 provision.go:87] duration metric: took 16.890851ms to configureAuth
	W0522 18:59:54.156258  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.156275  199775 retry.go:31] will retry after 481.652235ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.638820  199775 provision.go:84] configureAuth start
	I0522 18:59:54.638909  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:54.654914  199775 provision.go:87] duration metric: took 16.066954ms to configureAuth
	W0522 18:59:54.654931  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.654947  199775 retry.go:31] will retry after 524.561472ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:55.180403  199775 provision.go:84] configureAuth start
	I0522 18:59:55.180502  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:55.196860  199775 provision.go:87] duration metric: took 16.431785ms to configureAuth
	W0522 18:59:55.196883  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:55.196900  199775 retry.go:31] will retry after 1.026684822s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:56.224017  199775 provision.go:84] configureAuth start
	I0522 18:59:56.224094  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:56.240191  199775 provision.go:87] duration metric: took 16.147804ms to configureAuth
	W0522 18:59:56.240210  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:56.240225  199775 retry.go:31] will retry after 1.24830816s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:57.489410  199775 provision.go:84] configureAuth start
	I0522 18:59:57.489485  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:57.506066  199775 provision.go:87] duration metric: took 16.629889ms to configureAuth
	W0522 18:59:57.506088  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:57.506104  199775 retry.go:31] will retry after 1.980502509s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:59.487327  199775 provision.go:84] configureAuth start
	I0522 18:59:59.487416  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:59.504374  199775 provision.go:87] duration metric: took 17.016249ms to configureAuth
	W0522 18:59:59.504396  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:59.504412  199775 retry.go:31] will retry after 2.100742547s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:01.606222  199775 provision.go:84] configureAuth start
	I0522 19:00:01.606310  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:01.624599  199775 provision.go:87] duration metric: took 18.345208ms to configureAuth
	W0522 19:00:01.624616  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:01.624643  199775 retry.go:31] will retry after 5.341834603s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:06.967947  199775 provision.go:84] configureAuth start
	I0522 19:00:06.968041  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:06.984081  199775 provision.go:87] duration metric: took 16.106563ms to configureAuth
	W0522 19:00:06.984103  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:06.984121  199775 retry.go:31] will retry after 7.535474965s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:14.521931  199775 provision.go:84] configureAuth start
	I0522 19:00:14.522007  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:14.537786  199775 provision.go:87] duration metric: took 15.830622ms to configureAuth
	W0522 19:00:14.537805  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:14.537825  199775 retry.go:31] will retry after 5.817132428s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:20.355103  199775 provision.go:84] configureAuth start
	I0522 19:00:20.355186  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:20.371230  199775 provision.go:87] duration metric: took 16.098634ms to configureAuth
	W0522 19:00:20.371250  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:20.371288  199775 retry.go:31] will retry after 16.531933092s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:36.904939  199775 provision.go:84] configureAuth start
	I0522 19:00:36.905020  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:36.921551  199775 provision.go:87] duration metric: took 16.579992ms to configureAuth
	W0522 19:00:36.921570  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:36.921593  199775 retry.go:31] will retry after 19.248116686s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:56.170224  199775 provision.go:84] configureAuth start
	I0522 19:00:56.170298  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:56.186614  199775 provision.go:87] duration metric: took 16.363908ms to configureAuth
	W0522 19:00:56.186636  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:56.186651  199775 retry.go:31] will retry after 38.419670127s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.608922  199775 provision.go:84] configureAuth start
	I0522 19:01:34.609067  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:01:34.626132  199775 provision.go:87] duration metric: took 17.167716ms to configureAuth
	W0522 19:01:34.626153  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.626184  199775 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.626193  199775 machine.go:97] duration metric: took 1m41.801276248s to provisionDockerMachine
	I0522 19:01:34.626259  199775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 19:01:34.626294  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 19:01:34.641200  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32942 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 19:01:34.723587  199775 command_runner.go:130] > 27%
	I0522 19:01:34.723897  199775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 19:01:34.727710  199775 command_runner.go:130] > 213G
	I0522 19:01:34.727922  199775 fix.go:56] duration metric: took 1m41.923292489s for fixHost
	I0522 19:01:34.727946  199775 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m41.923338944s
	W0522 19:01:34.728039  199775 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	* Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.729983  199775 out.go:177] 
	W0522 19:01:34.731623  199775 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 19:01:34.731635  199775 out.go:239] * 
	* 
	W0522 19:01:34.732492  199775 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 19:01:34.734003  199775 out.go:177] 

                                                
                                                
** /stderr **
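
Note on the healthz phase in the stderr log above: immediately after the restart the apiserver answers /healthz with HTTP 500 because the `poststarthook/rbac/bootstrap-roles` check is still pending ("reason withheld"); minikube keeps polling until the endpoint returns 200 at 18:59:46.577. The same per-check breakdown can be pulled by hand; a minimal sketch, assuming kubectl is already pointed at this cluster's kubeconfig:

	# Fetch the verbose healthz report (the [+]/[-] lines seen in the log)
	kubectl get --raw='/healthz?verbose'
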
multinode_test.go:378: failed to start cluster. args "out/minikube-linux-amd64 start -p multinode-737786 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker" : exit status 80
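
Note on the fatal error: `container addresses should have 2 values, got 1 values: []` comes from the IP lookup that the log retries for ~1m41s before provisioning gives up. The inspect template expects an "IPv4,IPv6" pair from the container's entry on the named network; if that key has no entry, the `with` guard expands to an empty string and no address pair can be recovered. A reproduction sketch using the exact command from the log, plus an added diagnostic listing (not part of the test) of the container's actual network attachments:

	# The lookup minikube retries; empty output reproduces the "got 1 values" failure
	docker container inspect -f '{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' multinode-737786-m02
	# Compare against the networks the container is actually attached to
	docker container inspect -f '{{json .NetworkSettings.Networks}}' multinode-737786-m02
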
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect multinode-737786
helpers_test.go:235: (dbg) docker inspect multinode-737786:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b",
	        "Created": "2024-05-22T18:32:23.801109111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 200052,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-05-22T18:59:35.072350452Z",
	            "FinishedAt": "2024-05-22T18:59:34.127787675Z"
	        },
	        "Image": "sha256:cb99fd41fe8a25604c2e3534b4c012b22ed4bc29522c7f33230caec3b2c64334",
	        "ResolvConfPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hostname",
	        "HostsPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/hosts",
	        "LogPath": "/var/lib/docker/containers/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b-json.log",
	        "Name": "/multinode-737786",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "multinode-737786:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "multinode-737786",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 2306867200,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4613734400,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae-init/diff:/var/lib/docker/overlay2/709a71ed13f27a0ebdbcf3488f11950b5a04338f8f86750702ebc331da0ae8e4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/merged",
	                "UpperDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/diff",
	                "WorkDir": "/var/lib/docker/overlay2/62f070e2659368c3ca3ebb616d2d6aa574155b2dfd5da24251e080f4551f93ae/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "multinode-737786",
	                "Source": "/var/lib/docker/volumes/multinode-737786/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "multinode-737786",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "multinode-737786",
	                "name.minikube.sigs.k8s.io": "multinode-737786",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "fef69d9334729fcbb789f2f371087e8b255b632f0c7f2d7972fe036919721b54",
	            "SandboxKey": "/var/run/docker/netns/fef69d933472",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32937"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32936"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32933"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32935"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32934"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "multinode-737786": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "NetworkID": "b174e10eedee312049d00080bb6166ba641282f05c912d7cd781278a531f5de6",
	                    "EndpointID": "67dc5ecee34462c00ca7c9b5d09ffd94342dde79c4946307816ef79512e23440",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DriverOpts": null,
	                    "DNSNames": [
	                        "multinode-737786",
	                        "b522c5b4d434"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
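One detail in the inspect output is easy to misread: the legacy top-level NetworkSettings.IPAddress field is empty, while the real address (192.168.67.2) lives under NetworkSettings.Networks["multinode-737786"]. That is normal for a container attached to a user-defined network, and it is why the provisioning log below always queries the network-scoped field, e.g.:

	docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786

If that template renders an empty string (container running but not yet attached, or attached under a different network name), splitting it on the comma produces exactly the two-values error reported above.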
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-737786 -n multinode-737786
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 logs -n 25
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m02_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m03 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp testdata/cp-test.txt                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786:/home/docker/cp-test_multinode-737786-m03_multinode-737786.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786 sudo cat                                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt                       | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m02:/home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n                                                                 | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | multinode-737786-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-737786 ssh -n multinode-737786-m02 sudo cat                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	|         | /home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-737786 node stop m03                                                          | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC | 22 May 24 18:52 UTC |
	| node    | multinode-737786 node start                                                             | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:52 UTC |                     |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-737786                                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC |                     |
	| stop    | -p multinode-737786                                                                     | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC | 22 May 24 18:55 UTC |
	| start   | -p multinode-737786                                                                     | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:55 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-737786                                                                | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:57 UTC |                     |
	| node    | multinode-737786 node delete                                                            | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:57 UTC |                     |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-737786 stop                                                                   | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:59 UTC | 22 May 24 18:59 UTC |
	| start   | -p multinode-737786                                                                     | multinode-737786 | jenkins | v1.33.1 | 22 May 24 18:59 UTC |                     |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	|         | --driver=docker                                                                         |                  |         |         |                     |                     |
	|         | --container-runtime=docker                                                              |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 18:59:34
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 18:59:34.657325  199775 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:59:34.657554  199775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:34.657562  199775 out.go:304] Setting ErrFile to fd 2...
	I0522 18:59:34.657566  199775 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:59:34.657720  199775 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:59:34.658203  199775 out.go:298] Setting JSON to false
	I0522 18:59:34.659121  199775 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":6119,"bootTime":1716398256,"procs":222,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 18:59:34.659169  199775 start.go:139] virtualization: kvm guest
	I0522 18:59:34.661548  199775 out.go:177] * [multinode-737786] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 18:59:34.663012  199775 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 18:59:34.664309  199775 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 18:59:34.662986  199775 notify.go:220] Checking for updates...
	I0522 18:59:34.666671  199775 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:34.667892  199775 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 18:59:34.669173  199775 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 18:59:34.670352  199775 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 18:59:34.671839  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:34.672242  199775 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 18:59:34.692956  199775 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 18:59:34.693064  199775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:59:34.736414  199775 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:59:34.727578342 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:59:34.736517  199775 docker.go:295] overlay module found
	I0522 18:59:34.739217  199775 out.go:177] * Using the docker driver based on existing profile
	I0522 18:59:34.740504  199775 start.go:297] selected driver: docker
	I0522 18:59:34.740520  199775 start.go:901] validating driver "docker" against &{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:59:34.740614  199775 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 18:59:34.740679  199775 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:59:34.783935  199775 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:31 OomKillDisable:true NGoroutines:43 SystemTime:2024-05-22 18:59:34.77502677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:59:34.784481  199775 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:59:34.784544  199775 cni.go:84] Creating CNI manager for ""
	I0522 18:59:34.784556  199775 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:59:34.784592  199775 start.go:340] cluster config:
	{Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:59:34.787213  199775 out.go:177] * Starting "multinode-737786" primary control-plane node in "multinode-737786" cluster
	I0522 18:59:34.788345  199775 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:59:34.789681  199775 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:59:34.790759  199775 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:59:34.790785  199775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:59:34.790797  199775 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 18:59:34.790805  199775 cache.go:56] Caching tarball of preloaded images
	I0522 18:59:34.790877  199775 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:59:34.790889  199775 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:59:34.790995  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:59:34.805492  199775 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:59:34.805534  199775 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:59:34.805553  199775 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:59:34.805602  199775 start.go:360] acquireMachinesLock for multinode-737786: {Name:mk00c2e59d2bf46ff3cb87a2140a50c1d66aa291 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:59:34.805679  199775 start.go:364] duration metric: took 53.78µs to acquireMachinesLock for "multinode-737786"
	I0522 18:59:34.805703  199775 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:59:34.805717  199775 fix.go:54] fixHost starting: 
	I0522 18:59:34.805959  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:34.821314  199775 fix.go:112] recreateIfNeeded on multinode-737786: state=Stopped err=<nil>
	W0522 18:59:34.821342  199775 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:59:34.823171  199775 out.go:177] * Restarting existing docker container for "multinode-737786" ...
	I0522 18:59:34.824579  199775 cli_runner.go:164] Run: docker start multinode-737786
	I0522 18:59:35.079040  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:35.096715  199775 kic.go:430] container "multinode-737786" state is running.
	I0522 18:59:35.097082  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:59:35.113368  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:59:35.113586  199775 machine.go:94] provisionDockerMachine start ...
	I0522 18:59:35.113653  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:35.130868  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:35.131109  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:35.131128  199775 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:59:35.131715  199775 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50388->127.0.0.1:32937: read: connection reset by peer
	I0522 18:59:38.242318  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
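The handshake failure at 18:59:35 is expected this soon after docker start: sshd inside the container is still coming up, and provisioning retries until the hostname command above succeeds three seconds later. A rough wait-for-sshd sketch, assuming the forwarded port 32937 from the inspect output (the address and retry budget are illustrative):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH polls the forwarded SSH port until something accepts TCP
	// connections, instead of failing on the first connection reset. A TCP
	// accept does not guarantee the SSH handshake will succeed, so callers
	// should still retry the handshake itself.
	func waitForSSH(addr string, attempts int) error {
		for i := 0; i < attempts; i++ {
			conn, err := net.DialTimeout("tcp", addr, time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("ssh on %s never became reachable", addr)
	}

	func main() {
		fmt.Println(waitForSSH("127.0.0.1:32937", 10))
	}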
	
	I0522 18:59:38.242343  199775 ubuntu.go:169] provisioning hostname "multinode-737786"
	I0522 18:59:38.242404  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.258417  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.258580  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.258592  199775 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786 && echo "multinode-737786" | sudo tee /etc/hostname
	I0522 18:59:38.380649  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786
	
	I0522 18:59:38.380723  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.396574  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.396746  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.396762  199775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:59:38.507150  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
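The embedded shell above keeps /etc/hosts consistent with the hostname that was just set: if no line already names multinode-737786, it either rewrites an existing 127.0.1.1 entry in place or appends a new one, so the net effect is a single line of the form:

	127.0.1.1 multinode-737786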
	I0522 18:59:38.507179  199775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:59:38.507193  199775 ubuntu.go:177] setting up certificates
	I0522 18:59:38.507220  199775 provision.go:84] configureAuth start
	I0522 18:59:38.507285  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:59:38.524413  199775 provision.go:143] copyHostCerts
	I0522 18:59:38.524446  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:59:38.524474  199775 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem, removing ...
	I0522 18:59:38.524488  199775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem
	I0522 18:59:38.524565  199775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/ca.pem (1078 bytes)
	I0522 18:59:38.524641  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:59:38.524659  199775 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem, removing ...
	I0522 18:59:38.524663  199775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem
	I0522 18:59:38.524690  199775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/cert.pem (1123 bytes)
	I0522 18:59:38.524730  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:59:38.524746  199775 exec_runner.go:144] found /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem, removing ...
	I0522 18:59:38.524753  199775 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem
	I0522 18:59:38.524780  199775 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18943-9771/.minikube/key.pem (1679 bytes)
	I0522 18:59:38.524822  199775 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem org=jenkins.multinode-737786 san=[127.0.0.1 192.168.67.2 localhost minikube multinode-737786]
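The regenerated server certificate carries SANs for every name this Docker endpoint may be addressed by: 127.0.0.1, the container IP 192.168.67.2, localhost, minikube, and the profile name. One way to confirm, from a shell inside the container (illustrative; assumes openssl is installed there):

	openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'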
	I0522 18:59:38.661121  199775 provision.go:177] copyRemoteCerts
	I0522 18:59:38.661175  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0522 18:59:38.661206  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.676916  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:38.759116  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0522 18:59:38.759181  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0522 18:59:38.779102  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0522 18:59:38.779146  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0522 18:59:38.799080  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0522 18:59:38.799127  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0522 18:59:38.819078  199775 provision.go:87] duration metric: took 311.841874ms to configureAuth
	I0522 18:59:38.819110  199775 ubuntu.go:193] setting minikube options for container-runtime
	I0522 18:59:38.819264  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:38.819329  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.835148  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.835384  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.835400  199775 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0522 18:59:38.947293  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0522 18:59:38.947317  199775 ubuntu.go:71] root file system type: overlay
	I0522 18:59:38.947414  199775 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0522 18:59:38.947480  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:38.963004  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:38.963177  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:38.963236  199775 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0522 18:59:39.085599  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0522 18:59:39.085656  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.101979  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:39.102166  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32937 <nil> <nil>}
	I0522 18:59:39.102183  199775 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0522 18:59:39.219901  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
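Two idioms in the unit installation above are worth calling out. The empty ExecStart= line followed by a full ExecStart=... is the standard systemd way to replace, rather than append to, the start command inherited from the base unit, exactly as the comments in the generated file explain; reduced to its essentials, such a drop-in is just:

	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd <replacement flags>

And the "sudo diff -u ... || { sudo mv ...; sudo systemctl ...; }" wrapper makes the write idempotent: docker is only reloaded, re-enabled, and restarted when the rendered unit actually differs from the one already installed, which is why this step completes in roughly a hundred milliseconds here.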
	I0522 18:59:39.219926  199775 machine.go:97] duration metric: took 4.106321609s to provisionDockerMachine
	I0522 18:59:39.219936  199775 start.go:293] postStartSetup for "multinode-737786" (driver="docker")
	I0522 18:59:39.219951  199775 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0522 18:59:39.220008  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0522 18:59:39.220045  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.236332  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.319285  199775 ssh_runner.go:195] Run: cat /etc/os-release
	I0522 18:59:39.322035  199775 command_runner.go:130] > PRETTY_NAME="Ubuntu 22.04.4 LTS"
	I0522 18:59:39.322050  199775 command_runner.go:130] > NAME="Ubuntu"
	I0522 18:59:39.322057  199775 command_runner.go:130] > VERSION_ID="22.04"
	I0522 18:59:39.322073  199775 command_runner.go:130] > VERSION="22.04.4 LTS (Jammy Jellyfish)"
	I0522 18:59:39.322080  199775 command_runner.go:130] > VERSION_CODENAME=jammy
	I0522 18:59:39.322086  199775 command_runner.go:130] > ID=ubuntu
	I0522 18:59:39.322091  199775 command_runner.go:130] > ID_LIKE=debian
	I0522 18:59:39.322097  199775 command_runner.go:130] > HOME_URL="https://www.ubuntu.com/"
	I0522 18:59:39.322102  199775 command_runner.go:130] > SUPPORT_URL="https://help.ubuntu.com/"
	I0522 18:59:39.322108  199775 command_runner.go:130] > BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
	I0522 18:59:39.322114  199775 command_runner.go:130] > PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
	I0522 18:59:39.322120  199775 command_runner.go:130] > UBUNTU_CODENAME=jammy
	I0522 18:59:39.322169  199775 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0522 18:59:39.322215  199775 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0522 18:59:39.322228  199775 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0522 18:59:39.322236  199775 info.go:137] Remote host: Ubuntu 22.04.4 LTS
	I0522 18:59:39.322251  199775 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/addons for local assets ...
	I0522 18:59:39.322307  199775 filesync.go:126] Scanning /home/jenkins/minikube-integration/18943-9771/.minikube/files for local assets ...
	I0522 18:59:39.322403  199775 filesync.go:149] local asset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> 166682.pem in /etc/ssl/certs
	I0522 18:59:39.322416  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /etc/ssl/certs/166682.pem
	I0522 18:59:39.322524  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0522 18:59:39.329906  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:59:39.350136  199775 start.go:296] duration metric: took 130.188186ms for postStartSetup
	I0522 18:59:39.350206  199775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:59:39.350258  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.365900  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.447364  199775 command_runner.go:130] > 27%
	I0522 18:59:39.447616  199775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:59:39.451318  199775 command_runner.go:130] > 213G
	I0522 18:59:39.451499  199775 fix.go:56] duration metric: took 4.64578222s for fixHost
	I0522 18:59:39.451522  199775 start.go:83] releasing machines lock for "multinode-737786", held for 4.645827696s
	I0522 18:59:39.451586  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:59:39.466833  199775 ssh_runner.go:195] Run: cat /version.json
	I0522 18:59:39.466877  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.466958  199775 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0522 18:59:39.467022  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:39.483558  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.484707  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:39.629310  199775 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0522 18:59:39.629370  199775 command_runner.go:130] > {"iso_version": "v1.33.1-1715594774-18869", "kicbase_version": "v0.0.44-1715707529-18887", "minikube_version": "v1.33.1", "commit": "c807e9fa51afbeb5e05a1f9101150532cb8aabaa"}
	I0522 18:59:39.629493  199775 ssh_runner.go:195] Run: systemctl --version
	I0522 18:59:39.633438  199775 command_runner.go:130] > systemd 249 (249.11-0ubuntu3.12)
	I0522 18:59:39.633463  199775 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I0522 18:59:39.633523  199775 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0522 18:59:39.637125  199775 command_runner.go:130] >   File: /etc/cni/net.d/200-loopback.conf
	I0522 18:59:39.637144  199775 command_runner.go:130] >   Size: 78        	Blocks: 8          IO Block: 4096   regular file
	I0522 18:59:39.637150  199775 command_runner.go:130] > Device: 37h/55d	Inode: 1307236     Links: 1
	I0522 18:59:39.637156  199775 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:59:39.637162  199775 command_runner.go:130] > Access: 2024-05-22 18:59:35.468422925 +0000
	I0522 18:59:39.637166  199775 command_runner.go:130] > Modify: 2024-05-22 18:55:34.983035774 +0000
	I0522 18:59:39.637171  199775 command_runner.go:130] > Change: 2024-05-22 18:55:34.983035774 +0000
	I0522 18:59:39.637176  199775 command_runner.go:130] >  Birth: 2024-05-22 18:55:34.983035774 +0000
	I0522 18:59:39.637344  199775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0522 18:59:39.652801  199775 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
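The find/sed pipeline above normalizes the loopback CNI config in place: it injects a "name" field when one is missing and pins cniVersion to 1.0.0, which current CNI plugins expect. A minimal sketch of the patched file's shape, inferred from those sed expressions (the node's actual file may carry extra fields):

	# Inspect the patched loopback config; shape inferred from the sed above:
	cat /etc/cni/net.d/200-loopback.conf
	# {
	#   "cniVersion": "1.0.0",
	#   "name": "loopback",
	#   "type": "loopback"
	# }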
	I0522 18:59:39.652866  199775 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0522 18:59:39.660640  199775 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0522 18:59:39.660665  199775 start.go:494] detecting cgroup driver to use...
	I0522 18:59:39.660698  199775 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:59:39.660801  199775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:59:39.674202  199775 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0522 18:59:39.674271  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0522 18:59:39.682295  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0522 18:59:39.690505  199775 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0522 18:59:39.690543  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0522 18:59:39.698420  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:59:39.706192  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0522 18:59:39.713967  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0522 18:59:39.721725  199775 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0522 18:59:39.729168  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0522 18:59:39.736990  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0522 18:59:39.745046  199775 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0522 18:59:39.752971  199775 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0522 18:59:39.759103  199775 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0522 18:59:39.759772  199775 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0522 18:59:39.766405  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:39.842699  199775 ssh_runner.go:195] Run: sudo systemctl restart containerd
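The sed sequence above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the legacy io.containerd.runtime.v1.linux and runc.v1 shims are remapped to runc.v2, and conf_dir is pinned to /etc/cni/net.d before the daemon is reloaded and restarted. A quick manual verification of the result (a sketch, not something the log runs):

	grep -n 'SystemdCgroup' /etc/containerd/config.toml   # expect: SystemdCgroup = false
	grep -n 'conf_dir' /etc/containerd/config.toml        # expect: conf_dir = "/etc/cni/net.d"
	systemctl is-active containerd                        # expect: active after the restart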
	I0522 18:59:39.913692  199775 start.go:494] detecting cgroup driver to use...
	I0522 18:59:39.913745  199775 detect.go:196] detected "cgroupfs" cgroup driver on host os
	I0522 18:59:39.913793  199775 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0522 18:59:39.923110  199775 command_runner.go:130] > # /lib/systemd/system/docker.service
	I0522 18:59:39.923130  199775 command_runner.go:130] > [Unit]
	I0522 18:59:39.923140  199775 command_runner.go:130] > Description=Docker Application Container Engine
	I0522 18:59:39.923148  199775 command_runner.go:130] > Documentation=https://docs.docker.com
	I0522 18:59:39.923154  199775 command_runner.go:130] > BindsTo=containerd.service
	I0522 18:59:39.923166  199775 command_runner.go:130] > After=network-online.target firewalld.service containerd.service
	I0522 18:59:39.923172  199775 command_runner.go:130] > Wants=network-online.target
	I0522 18:59:39.923182  199775 command_runner.go:130] > Requires=docker.socket
	I0522 18:59:39.923191  199775 command_runner.go:130] > StartLimitBurst=3
	I0522 18:59:39.923202  199775 command_runner.go:130] > StartLimitIntervalSec=60
	I0522 18:59:39.923210  199775 command_runner.go:130] > [Service]
	I0522 18:59:39.923216  199775 command_runner.go:130] > Type=notify
	I0522 18:59:39.923226  199775 command_runner.go:130] > Restart=on-failure
	I0522 18:59:39.923238  199775 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0522 18:59:39.923254  199775 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0522 18:59:39.923283  199775 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0522 18:59:39.923300  199775 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0522 18:59:39.923311  199775 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0522 18:59:39.923324  199775 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0522 18:59:39.923340  199775 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0522 18:59:39.923358  199775 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0522 18:59:39.923373  199775 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0522 18:59:39.923382  199775 command_runner.go:130] > ExecStart=
	I0522 18:59:39.923403  199775 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I0522 18:59:39.923415  199775 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0522 18:59:39.923427  199775 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0522 18:59:39.923440  199775 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0522 18:59:39.923450  199775 command_runner.go:130] > LimitNOFILE=infinity
	I0522 18:59:39.923460  199775 command_runner.go:130] > LimitNPROC=infinity
	I0522 18:59:39.923468  199775 command_runner.go:130] > LimitCORE=infinity
	I0522 18:59:39.923479  199775 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0522 18:59:39.923492  199775 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0522 18:59:39.923501  199775 command_runner.go:130] > TasksMax=infinity
	I0522 18:59:39.923510  199775 command_runner.go:130] > TimeoutStartSec=0
	I0522 18:59:39.923520  199775 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0522 18:59:39.923529  199775 command_runner.go:130] > Delegate=yes
	I0522 18:59:39.923540  199775 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0522 18:59:39.923551  199775 command_runner.go:130] > KillMode=process
	I0522 18:59:39.923564  199775 command_runner.go:130] > [Install]
	I0522 18:59:39.923574  199775 command_runner.go:130] > WantedBy=multi-user.target
	I0522 18:59:39.924050  199775 cruntime.go:279] skipping containerd shutdown because we are bound to it
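The bare ExecStart= followed by a populated ExecStart= is the override pattern the unit's own comments explain: systemd treats multiple ExecStart lines as a sequence, which is only valid for Type=oneshot, so a drop-in must first clear the inherited command. The "skipping containerd shutdown" decision follows from BindsTo=containerd.service in the same unit. A hypothetical drop-in using that pattern (path and flags are illustrative only, not taken from this run):

	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf >/dev/null <<-'EOF'
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart docker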
	I0522 18:59:39.924104  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0522 18:59:39.935210  199775 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0522 18:59:39.950982  199775 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0522 18:59:39.951036  199775 ssh_runner.go:195] Run: which cri-dockerd
	I0522 18:59:39.953987  199775 command_runner.go:130] > /usr/bin/cri-dockerd
	I0522 18:59:39.954086  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0522 18:59:39.961541  199775 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0522 18:59:39.978646  199775 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0522 18:59:40.087121  199775 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0522 18:59:40.186524  199775 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0522 18:59:40.186634  199775 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0522 18:59:40.202646  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:40.287896  199775 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0522 18:59:40.586647  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0522 18:59:40.596511  199775 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0522 18:59:40.606653  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:59:40.615578  199775 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0522 18:59:40.689807  199775 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0522 18:59:40.760812  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:40.832348  199775 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0522 18:59:40.843985  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0522 18:59:40.853262  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:40.919086  199775 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0522 18:59:40.978856  199775 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0522 18:59:40.978933  199775 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0522 18:59:40.982421  199775 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0522 18:59:40.982447  199775 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0522 18:59:40.982457  199775 command_runner.go:130] > Device: 40h/64d	Inode: 218         Links: 1
	I0522 18:59:40.982466  199775 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  999/  docker)
	I0522 18:59:40.982479  199775 command_runner.go:130] > Access: 2024-05-22 18:59:40.924817423 +0000
	I0522 18:59:40.982488  199775 command_runner.go:130] > Modify: 2024-05-22 18:59:40.924817423 +0000
	I0522 18:59:40.982495  199775 command_runner.go:130] > Change: 2024-05-22 18:59:40.924817423 +0000
	I0522 18:59:40.982504  199775 command_runner.go:130] >  Birth: -
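The unmask/enable/restart ordering above is socket activation: cri-docker.socket is brought up first so systemd owns /var/run/cri-dockerd.sock, and cri-docker.service is only restarted afterwards. The stat output confirms a socket file owned by root:docker with 0660 permissions. The same check by hand (sketch):

	systemctl is-enabled cri-docker.socket                          # expect: enabled
	test -S /var/run/cri-dockerd.sock && echo "socket file present" # -S: exists and is a socket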
	I0522 18:59:40.982534  199775 start.go:562] Will wait 60s for crictl version
	I0522 18:59:40.982578  199775 ssh_runner.go:195] Run: which crictl
	I0522 18:59:40.985607  199775 command_runner.go:130] > /usr/bin/crictl
	I0522 18:59:40.985671  199775 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0522 18:59:41.013467  199775 command_runner.go:130] > Version:  0.1.0
	I0522 18:59:41.013490  199775 command_runner.go:130] > RuntimeName:  docker
	I0522 18:59:41.013494  199775 command_runner.go:130] > RuntimeVersion:  26.1.2
	I0522 18:59:41.013499  199775 command_runner.go:130] > RuntimeApiVersion:  v1
	I0522 18:59:41.015520  199775 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.1.2
	RuntimeApiVersion:  v1
	I0522 18:59:41.015567  199775 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:59:41.036936  199775 command_runner.go:130] > 26.1.2
	I0522 18:59:41.037003  199775 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0522 18:59:41.057023  199775 command_runner.go:130] > 26.1.2
	I0522 18:59:41.060284  199775 out.go:204] * Preparing Kubernetes v1.30.1 on Docker 26.1.2 ...
	I0522 18:59:41.060360  199775 cli_runner.go:164] Run: docker network inspect multinode-737786 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0522 18:59:41.075514  199775 ssh_runner.go:195] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0522 18:59:41.078871  199775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
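That one-liner is a replace-then-append idiom for /etc/hosts: drop any stale host.minikube.internal entry, append the current gateway IP, and copy the temp file back over the original. Unrolled with the same IP and hostname, purely for readability:

	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/hosts.new   # drop any stale entry
	printf '192.168.67.1\thost.minikube.internal\n' >> /tmp/hosts.new  # append the current one
	sudo cp /tmp/hosts.new /etc/hosts                                  # swap the file back in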
	I0522 18:59:41.088506  199775 kubeadm.go:877] updating cluster {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0522 18:59:41.088614  199775 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:59:41.088651  199775 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:59:41.104540  199775 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:59:41.104560  199775 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:59:41.104567  199775 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:59:41.104574  199775 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:59:41.104580  199775 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:59:41.104588  199775 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:59:41.104597  199775 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:59:41.104605  199775 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:59:41.104618  199775 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:59:41.104632  199775 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:59:41.105493  199775 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:59:41.105511  199775 docker.go:615] Images already preloaded, skipping extraction
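The preload decision is a plain set comparison: the tags reported by docker images are checked against what the v1.30.1/docker preload tarball would provide, and since everything required is already present the tarball is never extracted. A hand-rolled equivalent using images from the list above (sketch):

	for img in registry.k8s.io/kube-apiserver:v1.30.1 registry.k8s.io/etcd:3.5.12-0 \
	           registry.k8s.io/coredns/coredns:v1.11.1 registry.k8s.io/pause:3.9; do
	  docker image inspect "$img" >/dev/null 2>&1 || echo "missing: $img"
	done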
	I0522 18:59:41.105571  199775 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0522 18:59:41.121068  199775 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.1
	I0522 18:59:41.121088  199775 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.1
	I0522 18:59:41.121095  199775 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.1
	I0522 18:59:41.121103  199775 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.1
	I0522 18:59:41.121109  199775 command_runner.go:130] > kindest/kindnetd:v20240513-cd2ac642
	I0522 18:59:41.121118  199775 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0522 18:59:41.121136  199775 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0522 18:59:41.121147  199775 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0522 18:59:41.121160  199775 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:59:41.121171  199775 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0522 18:59:41.122034  199775 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.1
	registry.k8s.io/kube-controller-manager:v1.30.1
	registry.k8s.io/kube-scheduler:v1.30.1
	registry.k8s.io/kube-proxy:v1.30.1
	kindest/kindnetd:v20240513-cd2ac642
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0522 18:59:41.122058  199775 cache_images.go:84] Images are preloaded, skipping loading
	I0522 18:59:41.122068  199775 kubeadm.go:928] updating node { 192.168.67.2 8443 v1.30.1 docker true true} ...
	I0522 18:59:41.122191  199775 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-737786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0522 18:59:41.122249  199775 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0522 18:59:41.164777  199775 command_runner.go:130] > cgroupfs
	I0522 18:59:41.166168  199775 cni.go:84] Creating CNI manager for ""
	I0522 18:59:41.166182  199775 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0522 18:59:41.166202  199775 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0522 18:59:41.166231  199775 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.30.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-737786 NodeName:multinode-737786 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0522 18:59:41.166360  199775 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-737786"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0522 18:59:41.166412  199775 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.1
	I0522 18:59:41.173501  199775 command_runner.go:130] > kubeadm
	I0522 18:59:41.173516  199775 command_runner.go:130] > kubectl
	I0522 18:59:41.173521  199775 command_runner.go:130] > kubelet
	I0522 18:59:41.174225  199775 binaries.go:44] Found k8s binaries, skipping transfer
	I0522 18:59:41.174277  199775 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0522 18:59:41.181641  199775 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0522 18:59:41.196590  199775 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0522 18:59:41.211515  199775 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
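The 2158-byte kubeadm.yaml.new written here is the four-document config rendered above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration, joined by ---). The log only diffs it against the live file later, but such a file can also be sanity-checked on the node with kubeadm's dry-run mode (illustrative invocation, using the binary path from the ls above):

	sudo /var/lib/minikube/binaries/v1.30.1/kubeadm init --dry-run --config /var/tmp/minikube/kubeadm.yaml.new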
	I0522 18:59:41.226228  199775 ssh_runner.go:195] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0522 18:59:41.229089  199775 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0522 18:59:41.238030  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:41.308903  199775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:59:41.320466  199775 certs.go:68] Setting up /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786 for IP: 192.168.67.2
	I0522 18:59:41.320483  199775 certs.go:194] generating shared ca certs ...
	I0522 18:59:41.320502  199775 certs.go:226] acquiring lock for ca certs: {Name:mkb1ad99e7529ca8084f0e374cc6ddf767aece14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:41.320646  199775 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key
	I0522 18:59:41.320698  199775 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key
	I0522 18:59:41.320711  199775 certs.go:256] generating profile certs ...
	I0522 18:59:41.320806  199775 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key
	I0522 18:59:41.320870  199775 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key.650c2b43
	I0522 18:59:41.320924  199775 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key
	I0522 18:59:41.320936  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0522 18:59:41.320952  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0522 18:59:41.320973  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0522 18:59:41.320987  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0522 18:59:41.321000  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0522 18:59:41.321014  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0522 18:59:41.321029  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0522 18:59:41.321047  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0522 18:59:41.321101  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem (1338 bytes)
	W0522 18:59:41.321137  199775 certs.go:480] ignoring /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668_empty.pem, impossibly tiny 0 bytes
	I0522 18:59:41.321150  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem (1679 bytes)
	I0522 18:59:41.321182  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem (1078 bytes)
	I0522 18:59:41.321210  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem (1123 bytes)
	I0522 18:59:41.321267  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem (1679 bytes)
	I0522 18:59:41.321326  199775 certs.go:484] found cert: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem (1708 bytes)
	I0522 18:59:41.321362  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.321379  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem -> /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.321399  199775 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem -> /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.322191  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0522 18:59:41.343282  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0522 18:59:41.364358  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0522 18:59:41.389503  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0522 18:59:41.466649  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0522 18:59:41.548665  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0522 18:59:41.576097  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0522 18:59:41.663618  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0522 18:59:41.686294  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0522 18:59:41.708614  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/certs/16668.pem --> /usr/share/ca-certificates/16668.pem (1338 bytes)
	I0522 18:59:41.755450  199775 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/ssl/certs/166682.pem --> /usr/share/ca-certificates/166682.pem (1708 bytes)
	I0522 18:59:41.777856  199775 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0522 18:59:41.792889  199775 ssh_runner.go:195] Run: openssl version
	I0522 18:59:41.797633  199775 command_runner.go:130] > OpenSSL 3.0.2 15 Mar 2022 (Library: OpenSSL 3.0.2 15 Mar 2022)
	I0522 18:59:41.797704  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/166682.pem && ln -fs /usr/share/ca-certificates/166682.pem /etc/ssl/certs/166682.pem"
	I0522 18:59:41.805732  199775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.808853  199775 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.808893  199775 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May 22 17:49 /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.808935  199775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/166682.pem
	I0522 18:59:41.814721  199775 command_runner.go:130] > 3ec20f2e
	I0522 18:59:41.814945  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/166682.pem /etc/ssl/certs/3ec20f2e.0"
	I0522 18:59:41.822355  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0522 18:59:41.830234  199775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.833113  199775 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.833152  199775 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May 22 17:45 /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.833193  199775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0522 18:59:41.838909  199775 command_runner.go:130] > b5213941
	I0522 18:59:41.838964  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0522 18:59:41.846130  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16668.pem && ln -fs /usr/share/ca-certificates/16668.pem /etc/ssl/certs/16668.pem"
	I0522 18:59:41.854027  199775 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.856957  199775 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.856978  199775 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May 22 17:49 /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.857008  199775 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16668.pem
	I0522 18:59:41.863854  199775 command_runner.go:130] > 51391683
	I0522 18:59:41.864155  199775 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16668.pem /etc/ssl/certs/51391683.0"
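Each cert block above follows OpenSSL's hashed-directory convention: openssl x509 -hash -noout prints the subject-name hash (3ec20f2e, b5213941, 51391683 here), and a <hash>.0 symlink under /etc/ssl/certs is what lets OpenSSL find the CA during verification. The same two steps for one cert:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"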
	I0522 18:59:41.872963  199775 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:59:41.876441  199775 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0522 18:59:41.876466  199775 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0522 18:59:41.876476  199775 command_runner.go:130] > Device: 801h/2049d	Inode: 1307017     Links: 1
	I0522 18:59:41.876485  199775 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0522 18:59:41.876495  199775 command_runner.go:130] > Access: 2024-05-22 18:55:37.083187616 +0000
	I0522 18:59:41.876502  199775 command_runner.go:130] > Modify: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:59:41.876513  199775 command_runner.go:130] > Change: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:59:41.876522  199775 command_runner.go:130] >  Birth: 2024-05-22 18:32:29.570873454 +0000
	I0522 18:59:41.876589  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0522 18:59:41.884263  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.884568  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0522 18:59:41.891332  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.891516  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0522 18:59:41.897217  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.897376  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0522 18:59:41.903657  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.903908  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0522 18:59:41.909914  199775 command_runner.go:130] > Certificate will not expire
	I0522 18:59:41.910175  199775 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0522 18:59:41.917460  199775 command_runner.go:130] > Certificate will not expire
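-checkend 86400 makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" is its success message, so every control-plane cert here is reused instead of regenerated. The exit status carries the decision:

	if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	  echo "valid for at least another day; keep it"
	else
	  echo "expires within 24h; regenerate"
	fi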
	I0522 18:59:41.917518  199775 kubeadm.go:391] StartCluster: {Name:multinode-737786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:multinode-737786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP: Port:0 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 18:59:41.917656  199775 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0522 18:59:41.960301  199775 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0522 18:59:41.969598  199775 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0522 18:59:41.969631  199775 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0522 18:59:41.969641  199775 command_runner.go:130] > /var/lib/minikube/etcd:
	I0522 18:59:41.969648  199775 command_runner.go:130] > member
	W0522 18:59:41.970788  199775 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0522 18:59:41.970807  199775 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0522 18:59:41.970813  199775 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0522 18:59:41.970855  199775 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0522 18:59:41.982721  199775 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0522 18:59:41.983078  199775 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-737786" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:41.983181  199775 kubeconfig.go:62] /home/jenkins/minikube-integration/18943-9771/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-737786" cluster setting kubeconfig missing "multinode-737786" context setting]
	I0522 18:59:41.983550  199775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:41.984046  199775 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:41.984249  199775 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:59:41.984661  199775 cert_rotation.go:137] Starting client certificate rotation controller
	I0522 18:59:41.984900  199775 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0522 18:59:42.054179  199775 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.67.2
	I0522 18:59:42.054210  199775 kubeadm.go:591] duration metric: took 83.392356ms to restartPrimaryControlPlane
	I0522 18:59:42.054219  199775 kubeadm.go:393] duration metric: took 136.705239ms to StartCluster
	I0522 18:59:42.054237  199775 settings.go:142] acquiring lock: {Name:mk1bbb63a81703f4c38b97b29a878017a54a2114 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:42.054314  199775 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:42.054846  199775 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/kubeconfig: {Name:mkb679b50174ade9b53ad7d806acd171ac61db6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 18:59:42.055084  199775 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0522 18:59:42.057350  199775 out.go:177] * Verifying Kubernetes components...
	I0522 18:59:42.055335  199775 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0522 18:59:42.055409  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:42.057426  199775 addons.go:69] Setting storage-provisioner=true in profile "multinode-737786"
	I0522 18:59:42.057464  199775 addons.go:234] Setting addon storage-provisioner=true in "multinode-737786"
	W0522 18:59:42.058679  199775 addons.go:243] addon storage-provisioner should already be in state true
	I0522 18:59:42.057469  199775 addons.go:69] Setting default-storageclass=true in profile "multinode-737786"
	I0522 18:59:42.058657  199775 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0522 18:59:42.058798  199775 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:59:42.058805  199775 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-737786"
	I0522 18:59:42.059102  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:42.059303  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:42.084954  199775 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0522 18:59:42.083978  199775 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 18:59:42.085298  199775 kapi.go:59] client config for multinode-737786: &rest.Config{Host:"https://192.168.67.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/client.key", CAFile:"/home/jenkins/minikube-integration/18943-9771/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1cf6e80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0522 18:59:42.086409  199775 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:59:42.086425  199775 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0522 18:59:42.086473  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:42.086612  199775 addons.go:234] Setting addon default-storageclass=true in "multinode-737786"
	W0522 18:59:42.086623  199775 addons.go:243] addon default-storageclass should already be in state true
	I0522 18:59:42.086647  199775 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:59:42.087068  199775 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:59:42.108554  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:42.108724  199775 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0522 18:59:42.108740  199775 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0522 18:59:42.108801  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:59:42.124067  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32937 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:59:42.349892  199775 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0522 18:59:42.365487  199775 node_ready.go:35] waiting up to 6m0s for node "multinode-737786" to be "Ready" ...
	I0522 18:59:42.365636  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:42.365650  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:42.365661  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:42.365667  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:42.365926  199775 round_trippers.go:574] Response Status:  in 0 milliseconds
	I0522 18:59:42.365955  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:42.368759  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:59:42.445191  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:59:42.747644  199775 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:59:42.747689  199775 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:59:42.747716  199775 retry.go:31] will retry after 271.82269ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:59:42.747779  199775 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0522 18:59:42.747804  199775 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0522 18:59:42.747817  199775 retry.go:31] will retry after 351.337067ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
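
[Editor's note] Both apply failures above are "connection refused" against localhost:8443 while the apiserver is still restarting; minikube's retry.go simply re-runs the kubectl apply after a short, growing delay ("will retry after 271.82269ms" / "351.337067ms"). A minimal sketch of that retry pattern, using only the standard library — applyWithRetry is a hypothetical helper, not minikube's actual code:

    package main

    import (
        "fmt"
        "math/rand"
        "os/exec"
        "time"
    )

    // applyWithRetry re-runs `kubectl apply -f manifest` until it exits 0,
    // sleeping a jittered, growing delay between attempts, mirroring the
    // "apply failed, will retry after ..." behaviour logged above.
    func applyWithRetry(manifest string, attempts int) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = exec.Command("kubectl", "apply", "-f", manifest).Run(); err == nil {
                return nil
            }
            delay := time.Duration(i+1)*100*time.Millisecond +
                time.Duration(rand.Int63n(int64(300*time.Millisecond)))
            fmt.Printf("apply failed, will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
        }
        return fmt.Errorf("giving up after %d attempts: %w", attempts, err)
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storageclass.yaml", 5); err != nil {
            fmt.Println(err)
        }
    }

The jitter keeps concurrent appliers (storageclass and storage-provisioner run in parallel above) from retrying in lockstep against a recovering apiserver.
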
	I0522 18:59:42.865881  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:42.865907  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:42.865918  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:42.865925  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:43.020621  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0522 18:59:43.099952  199775 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0522 18:59:44.953522  199775 round_trippers.go:574] Response Status: 200 OK in 2087 milliseconds
	I0522 18:59:44.953553  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:44.953563  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:44.953568  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:44.953573  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:44.953577  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:44.953582  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:44 GMT
	I0522 18:59:44.953585  199775 round_trippers.go:580]     Audit-Id: 00349735-99e2-451d-a0ec-1bc8cad4692e
	I0522 18:59:44.954791  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:44.955615  199775 node_ready.go:49] node "multinode-737786" has status "Ready":"True"
	I0522 18:59:44.955635  199775 node_ready.go:38] duration metric: took 2.590096009s for node "multinode-737786" to be "Ready" ...
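
[Editor's note] node_ready.go declares success once the Node object reports the Ready condition as True, which it checks by re-fetching /api/v1/nodes/multinode-737786 in a loop. A hedged client-go sketch of the equivalent check (clientset construction omitted; pollNodeReady is an illustrative name, not minikube's):

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // pollNodeReady blocks until the named node reports Ready=True,
    // re-fetching the Node object every interval, like the repeated
    // node GETs in the log above.
    func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, interval time.Duration) error {
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(interval):
            }
        }
    }
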
	I0522 18:59:44.955648  199775 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0522 18:59:44.955720  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:59:44.955726  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:44.955736  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:44.955742  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:44.962955  199775 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0522 18:59:44.962991  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:44.963001  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:44.963009  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:44 GMT
	I0522 18:59:44.963013  199775 round_trippers.go:580]     Audit-Id: 95ec849a-2081-4480-8152-bae6335ebbe1
	I0522 18:59:44.963017  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:44.963022  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:44.963026  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:44.963637  199775 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1783"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1637","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 59082 chars]
	I0522 18:59:44.968126  199775 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:44.968231  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-jhsz9
	I0522 18:59:44.968241  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:44.968251  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:44.968260  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:44.969901  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:44.969916  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:44.969925  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:44.969930  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:44.969936  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:44 GMT
	I0522 18:59:44.969940  199775 round_trippers.go:580]     Audit-Id: 790ee2f6-bbfb-4442-ad45-71a07163d279
	I0522 18:59:44.969943  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:44.969947  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:44.970163  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1637","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6448 chars]
	I0522 18:59:44.970661  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:44.970679  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:44.970689  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:44.970694  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.048873  199775 round_trippers.go:574] Response Status: 200 OK in 78 milliseconds
	I0522 18:59:45.048896  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.048925  199775 round_trippers.go:580]     Audit-Id: ff89e442-6b26-4bf7-b902-4b2e2a86a546
	I0522 18:59:45.048930  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.048934  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.048938  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 
	I0522 18:59:45.048943  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 
	I0522 18:59:45.048948  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.049638  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.050090  199775 pod_ready.go:92] pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.050135  199775 pod_ready.go:81] duration metric: took 81.977298ms for pod "coredns-7db6d8ff4d-jhsz9" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.050156  199775 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.050240  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-737786
	I0522 18:59:45.050253  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.050263  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.050268  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.054731  199775 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0522 18:59:45.054756  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.054781  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.054788  199775 round_trippers.go:580]     Audit-Id: 44b32557-8db3-40f9-9a12-a58962945a26
	I0522 18:59:45.054797  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.054804  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.054816  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.054826  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.055152  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-737786","namespace":"kube-system","uid":"6bb7cf66-bbe8-4383-8c36-b49c1be34a69","resourceVersion":"1612","creationTimestamp":"2024-05-22T18:32:36Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.67.2:2379","kubernetes.io/config.hash":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.mirror":"4602f65e0dbd5e302570c7ddba56faa5","kubernetes.io/config.seen":"2024-05-22T18:32:32.217268427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0522 18:59:45.055827  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.055873  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.055885  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.055897  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.057594  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.057613  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.057622  199775 round_trippers.go:580]     Audit-Id: 66d081a8-8f2f-4907-a6ce-8ffec0da4bff
	I0522 18:59:45.057627  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.057649  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.057657  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.057660  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.057665  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.057774  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.058153  199775 pod_ready.go:92] pod "etcd-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.058177  199775 pod_ready.go:81] duration metric: took 8.009151ms for pod "etcd-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.058193  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.058281  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-737786
	I0522 18:59:45.058292  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.058301  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.058305  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.060506  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:45.060537  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.060545  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.060551  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.060555  199775 round_trippers.go:580]     Audit-Id: 872a911f-72f0-4610-9933-ab1beea1fab1
	I0522 18:59:45.060561  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.060564  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.060569  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.060744  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-737786","namespace":"kube-system","uid":"f2c4828e-8746-4281-9cba-98573dcfa2ff","resourceVersion":"1621","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.67.2:8443","kubernetes.io/config.hash":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.mirror":"e26311d8d9ac20af7e4c2c1c5c36c4c2","kubernetes.io/config.seen":"2024-05-22T18:32:37.634740806Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8740 chars]
	I0522 18:59:45.061219  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.061244  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.061267  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.061276  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.062712  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.062729  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.062739  199775 round_trippers.go:580]     Audit-Id: 38b34a7c-a076-49e9-8c7c-66c4e27de82c
	I0522 18:59:45.062745  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.062749  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.062755  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.062760  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.062764  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.062913  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.063163  199775 pod_ready.go:92] pod "kube-apiserver-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.063175  199775 pod_ready.go:81] duration metric: took 4.975505ms for pod "kube-apiserver-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.063184  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.063226  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-737786
	I0522 18:59:45.063234  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.063239  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.063243  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.064913  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.064928  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.064937  199775 round_trippers.go:580]     Audit-Id: 78545d9c-25a0-4cd2-b7d2-dd3dc3bf4092
	I0522 18:59:45.064943  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.064947  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.064951  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.064965  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.064970  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.065121  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-737786","namespace":"kube-system","uid":"ee1e5fdf-44fd-434f-b7ee-bc951867af65","resourceVersion":"1617","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.mirror":"b578c4ffcc18da717db4fe4330c036af","kubernetes.io/config.seen":"2024-05-22T18:32:37.634732959Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 8313 chars]
	I0522 18:59:45.065638  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.065652  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.065662  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.065670  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.066993  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.067008  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.067016  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.067021  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.067026  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.067030  199775 round_trippers.go:580]     Audit-Id: 43a0c948-088c-41ca-afbb-eaa3b36def3b
	I0522 18:59:45.067034  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.067045  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.067153  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.067518  199775 pod_ready.go:92] pod "kube-controller-manager-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.067533  199775 pod_ready.go:81] duration metric: took 4.342707ms for pod "kube-controller-manager-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.067541  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.067580  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kqtgj
	I0522 18:59:45.067584  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.067591  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.067594  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.069029  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.069042  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.069048  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.069051  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.069054  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.069056  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.069064  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.069068  199775 round_trippers.go:580]     Audit-Id: eefd608b-03a6-4bf1-b6c7-a2cbe257b0a1
	I0522 18:59:45.069245  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kqtgj","generateName":"kube-proxy-","namespace":"kube-system","uid":"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75","resourceVersion":"1607","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"controller-revision-hash":"5dbf89796d","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"33b24b0a-e17b-496d-a5e6-72c18dd0158a","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33b24b0a-e17b-496d-a5e6-72c18dd0158a\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0522 18:59:45.150867  199775 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0522 18:59:45.150908  199775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.130257791s)
	I0522 18:59:45.151029  199775 round_trippers.go:463] GET https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses
	I0522 18:59:45.151039  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.151051  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.151056  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.152654  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.152684  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.152692  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.152697  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.152701  199775 round_trippers.go:580]     Content-Length: 1274
	I0522 18:59:45.152705  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.152709  199775 round_trippers.go:580]     Audit-Id: 2402c5d5-4b0d-46a9-a5dd-acb12a0778a3
	I0522 18:59:45.152716  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.152720  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.152750  199775 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"1787"},"items":[{"metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubern
etes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is [truncated 250 chars]
	I0522 18:59:45.153265  199775 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:59:45.153319  199775 round_trippers.go:463] PUT https://192.168.67.2:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0522 18:59:45.153347  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.153361  199775 round_trippers.go:473]     Content-Type: application/json
	I0522 18:59:45.153369  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.153373  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.155795  199775 request.go:629] Waited for 86.18921ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.155861  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.155871  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.155883  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.155891  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.156116  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:45.156135  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.156145  199775 round_trippers.go:580]     Audit-Id: ab1b87fb-f688-48fd-ac50-a321eefa04e3
	I0522 18:59:45.156152  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.156156  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.156160  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.156164  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.156169  199775 round_trippers.go:580]     Content-Length: 1220
	I0522 18:59:45.156174  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.156202  199775 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"d6d4bbb5-9510-4932-86fe-83930be479f7","resourceVersion":"357","creationTimestamp":"2024-05-22T18:32:52Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-22T18:32:52Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0522 18:59:45.157451  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.157473  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.157482  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.157488  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.157494  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.157499  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.157503  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.157507  199775 round_trippers.go:580]     Audit-Id: a51128b5-04d3-4748-abea-ec42a0969a69
	I0522 18:59:45.157620  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1530","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.158103  199775 pod_ready.go:92] pod "kube-proxy-kqtgj" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.158129  199775 pod_ready.go:81] duration metric: took 90.579489ms for pod "kube-proxy-kqtgj" in "kube-system" namespace to be "Ready" ...
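
[Editor's note] The "Waited for 86.18921ms due to client-side throttling, not priority and fairness" message a few lines up comes from client-go's own token-bucket rate limiter, not from the server: with client-go's defaults (QPS 5, Burst 10) a burst of GETs gets spaced out locally before it ever leaves the process. A hedged sketch of raising those limits on a rest.Config (kubeconfigPath is an assumed input; the exact values are illustrative):

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // newFastClient builds a clientset whose client-side rate limiter
    // allows 50 req/s with bursts of 100, instead of the client-go
    // defaults that produce the "client-side throttling" waits above.
    func newFastClient(kubeconfigPath string) (*kubernetes.Clientset, error) {
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfigPath)
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }
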
	I0522 18:59:45.158142  199775 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.323712  199775 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0522 18:59:45.337452  199775 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0522 18:59:45.353065  199775 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:59:45.356261  199775 request.go:629] Waited for 198.062578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:59:45.356335  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-737786
	I0522 18:59:45.356345  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.356354  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.356360  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.358209  199775 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0522 18:59:45.358229  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.358236  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.358240  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.358243  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.358245  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.358248  199775 round_trippers.go:580]     Audit-Id: 42473482-cac4-43d8-add8-01891a6ba4a3
	I0522 18:59:45.358251  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.358428  199775 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-737786","namespace":"kube-system","uid":"34621b7d-073b-48b0-bc2e-d4af1a694d3c","resourceVersion":"1614","creationTimestamp":"2024-05-22T18:32:37Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.mirror":"f66196d0ed0ffd1f075eb4c44595acc9","kubernetes.io/config.seen":"2024-05-22T18:32:37.634737420Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:37Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0522 18:59:45.366817  199775 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0522 18:59:45.433333  199775 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0522 18:59:45.547435  199775 command_runner.go:130] > pod/storage-provisioner configured
	I0522 18:59:45.552147  199775 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.452155221s)
	I0522 18:59:45.554994  199775 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I0522 18:59:45.556295  199775 addons.go:505] duration metric: took 3.500954665s for enable addons: enabled=[default-storageclass storage-provisioner]
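
[Editor's note] The default-storageclass addon above is reconciled with a read-modify-write: GET the StorageClass list, then PUT the "standard" object back at the resourceVersion it was fetched with (the PUT to /apis/storage.k8s.io/v1/storageclasses/standard). An equivalent, hedged client-go sketch — the Get/Update pair issues the same GET-then-PUT seen in the log, with optimistic concurrency enforced by the carried resourceVersion:

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // ensureDefaultStorageClass re-reads the "standard" StorageClass and
    // writes it back with the default-class annotation set; a conflicting
    // concurrent write would fail the Update with a 409.
    func ensureDefaultStorageClass(ctx context.Context, cs *kubernetes.Clientset) error {
        sc, err := cs.StorageV1().StorageClasses().Get(ctx, "standard", metav1.GetOptions{})
        if err != nil {
            return err
        }
        if sc.Annotations == nil {
            sc.Annotations = map[string]string{}
        }
        sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
        _, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
        return err
    }
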
	I0522 18:59:45.556235  199775 request.go:629] Waited for 197.339011ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.556431  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes/multinode-737786
	I0522 18:59:45.556456  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:45.556475  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:45.556487  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:45.562601  199775 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0522 18:59:45.562625  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:45.562634  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:45 GMT
	I0522 18:59:45.562640  199775 round_trippers.go:580]     Audit-Id: 9b368b52-aa14-4c21-9228-188cb027a4b9
	I0522 18:59:45.562644  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:45.562648  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:45.562653  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:45.562656  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:45.562850  199775 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1788","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-22T18:32:35Z","fieldsType":"FieldsV1","f [truncated 5210 chars]
	I0522 18:59:45.563264  199775 pod_ready.go:92] pod "kube-scheduler-multinode-737786" in "kube-system" namespace has status "Ready":"True"
	I0522 18:59:45.563305  199775 pod_ready.go:81] duration metric: took 405.154046ms for pod "kube-scheduler-multinode-737786" in "kube-system" namespace to be "Ready" ...
	I0522 18:59:45.563318  199775 pod_ready.go:38] duration metric: took 607.658988ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
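
[Editor's note] Each pod_ready.go wait above (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) reduces to reading the pod's PodReady condition from the API. A hedged sketch of that single check with client-go (isPodReady is an illustrative name):

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // isPodReady fetches a pod from kube-system and reports whether its
    // Ready condition is True — the check behind each
    // `pod "..." has status "Ready":"True"` line above.
    func isPodReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
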
	I0522 18:59:45.563342  199775 api_server.go:52] waiting for apiserver process to appear ...
	I0522 18:59:45.563411  199775 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:59:45.577417  199775 command_runner.go:130] > 2003
	I0522 18:59:45.577457  199775 api_server.go:72] duration metric: took 3.522320494s to wait for apiserver process to appear ...
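
[Editor's note] The "waiting for apiserver process" step runs the pgrep shown above over SSH and treats a printed PID ("2003") as success. A local, hedged equivalent using os/exec, assuming pgrep is on PATH:

    import (
        "os/exec"
        "strings"
    )

    // apiserverPID runs the same pgrep pattern minikube ships to the node
    // and returns the matched PID, or an error if no process matches.
    func apiserverPID() (string, error) {
        out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }
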
	I0522 18:59:45.577467  199775 api_server.go:88] waiting for apiserver healthz status ...
	I0522 18:59:45.577483  199775 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:59:45.580888  199775 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0522 18:59:45.580921  199775 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0522 18:59:46.078506  199775 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:59:46.144573  199775 api_server.go:279] https://192.168.67.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0522 18:59:46.144658  199775 api_server.go:103] status: https://192.168.67.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0522 18:59:46.577749  199775 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:59:46.581996  199775 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
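
[Editor's note] The two 500s above are expected during startup: /healthz aggregates per-component checks, and any "[-]" line (here the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks) fails the whole probe until those hooks complete, after which the endpoint flips to 200/"ok". A hedged sketch of polling it with the standard library (the InsecureSkipVerify transport is an assumption for a self-signed test cluster, not a production setting):

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls https://<base>/healthz until it returns 200,
    // printing the aggregated check report (the [+]/[-] lines above)
    // whenever the probe still fails.
    func waitHealthz(base string, timeout time.Duration) error {
        client := &http.Client{
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // test cluster, self-signed certs
            },
            Timeout: 5 * time.Second,
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(base + "/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %v", timeout)
    }
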
	I0522 18:59:46.582072  199775 round_trippers.go:463] GET https://192.168.67.2:8443/version
	I0522 18:59:46.582080  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.582087  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.582091  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.582847  199775 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0522 18:59:46.582863  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.582870  199775 round_trippers.go:580]     Audit-Id: 53752c00-8dc0-4bfb-8fe1-45ec40c3d2fb
	I0522 18:59:46.582874  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.582889  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.582897  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.582900  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.582904  199775 round_trippers.go:580]     Content-Length: 263
	I0522 18:59:46.582906  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.582920  199775 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.1",
	  "gitCommit": "6911225c3f747e1cd9d109c305436d08b668f086",
	  "gitTreeState": "clean",
	  "buildDate": "2024-05-14T10:42:02Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0522 18:59:46.583003  199775 api_server.go:141] control plane version: v1.30.1
	I0522 18:59:46.583025  199775 api_server.go:131] duration metric: took 1.005552775s to wait for apiserver health ...
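
[Editor's note] The control-plane version is read straight from /version, whose JSON body above maps onto client-go's version.Info struct (GitVersion, GoVersion, Platform, ...). A hedged sketch using the discovery client instead of a raw GET:

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
    )

    // controlPlaneVersion asks the apiserver for /version; the fields of
    // the returned version.Info correspond one-to-one to the JSON body
    // logged above.
    func controlPlaneVersion(cs *kubernetes.Clientset) (string, error) {
        info, err := cs.Discovery().ServerVersion()
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("%s (%s, %s)", info.GitVersion, info.GoVersion, info.Platform), nil
    }
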
	I0522 18:59:46.583040  199775 system_pods.go:43] waiting for kube-system pods to appear ...
	I0522 18:59:46.583094  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:59:46.583101  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.583118  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.583137  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.652541  199775 round_trippers.go:574] Response Status: 200 OK in 69 milliseconds
	I0522 18:59:46.652564  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.652571  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.652575  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.652578  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.652581  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.652621  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.652627  199775 round_trippers.go:580]     Audit-Id: ee158221-6aef-4f5f-9f4c-d04d9cd99a78
	I0522 18:59:46.654310  199775 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1793","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60493 chars]
	I0522 18:59:46.656705  199775 system_pods.go:59] 8 kube-system pods found
	I0522 18:59:46.656775  199775 system_pods.go:61] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:59:46.656799  199775 system_pods.go:61] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:59:46.656812  199775 system_pods.go:61] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:59:46.656826  199775 system_pods.go:61] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:59:46.656839  199775 system_pods.go:61] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:59:46.656853  199775 system_pods.go:61] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:59:46.656864  199775 system_pods.go:61] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:59:46.656879  199775 system_pods.go:61] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:59:46.656887  199775 system_pods.go:74] duration metric: took 73.839475ms to wait for pod list to return data ...
	I0522 18:59:46.656909  199775 default_sa.go:34] waiting for default service account to be created ...
	I0522 18:59:46.657013  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/default/serviceaccounts
	I0522 18:59:46.657031  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.657041  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.657052  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.659079  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:46.659105  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.659116  199775 round_trippers.go:580]     Audit-Id: 9e4ded67-5094-4348-a2b4-ec3e2afc8e53
	I0522 18:59:46.659123  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.659132  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.659136  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.659148  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.659152  199775 round_trippers.go:580]     Content-Length: 262
	I0522 18:59:46.659167  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.659187  199775 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"de27f301-3e6f-4391-ad6c-a63ded62bb92","resourceVersion":"305","creationTimestamp":"2024-05-22T18:32:51Z"}}]}
	I0522 18:59:46.659421  199775 default_sa.go:45] found service account: "default"
	I0522 18:59:46.659446  199775 default_sa.go:55] duration metric: took 2.524633ms for default service account to be created ...
	I0522 18:59:46.659457  199775 system_pods.go:116] waiting for k8s-apps to be running ...
	I0522 18:59:46.659517  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/namespaces/kube-system/pods
	I0522 18:59:46.659532  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.659542  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.659548  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.662463  199775 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0522 18:59:46.662480  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.662487  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.662491  199775 round_trippers.go:580]     Audit-Id: 80886460-303e-4011-adae-70833530b5b7
	I0522 18:59:46.662494  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.662497  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.662500  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.662503  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.663455  199775 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1858"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-jhsz9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"0123bfa5-2086-4863-9436-8a0b88e1d95a","resourceVersion":"1793","creationTimestamp":"2024-05-22T18:32:51Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"5ac24674-7cbd-449f-8603-102d57baae3b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-22T18:32:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5ac24674-7cbd-449f-8603-102d57baae3b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 60493 chars]
	I0522 18:59:46.665338  199775 system_pods.go:86] 8 kube-system pods found
	I0522 18:59:46.665361  199775 system_pods.go:89] "coredns-7db6d8ff4d-jhsz9" [0123bfa5-2086-4863-9436-8a0b88e1d95a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0522 18:59:46.665368  199775 system_pods.go:89] "etcd-multinode-737786" [6bb7cf66-bbe8-4383-8c36-b49c1be34a69] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0522 18:59:46.665375  199775 system_pods.go:89] "kindnet-qpfbl" [e454b0cd-e618-4268-8882-69d2a4544917] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0522 18:59:46.665392  199775 system_pods.go:89] "kube-apiserver-multinode-737786" [f2c4828e-8746-4281-9cba-98573dcfa2ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0522 18:59:46.665398  199775 system_pods.go:89] "kube-controller-manager-multinode-737786" [ee1e5fdf-44fd-434f-b7ee-bc951867af65] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0522 18:59:46.665414  199775 system_pods.go:89] "kube-proxy-kqtgj" [b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0522 18:59:46.665425  199775 system_pods.go:89] "kube-scheduler-multinode-737786" [34621b7d-073b-48b0-bc2e-d4af1a694d3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0522 18:59:46.665433  199775 system_pods.go:89] "storage-provisioner" [5d953629-c86b-47be-84da-baa3bdf24d2e] Running
	I0522 18:59:46.665440  199775 system_pods.go:126] duration metric: took 5.977627ms to wait for k8s-apps to be running ...
	I0522 18:59:46.665446  199775 system_svc.go:44] waiting for kubelet service to be running ....
	I0522 18:59:46.665481  199775 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:59:46.680943  199775 system_svc.go:56] duration metric: took 15.466416ms WaitForService to wait for kubelet
	I0522 18:59:46.681027  199775 kubeadm.go:576] duration metric: took 4.625886917s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0522 18:59:46.681071  199775 node_conditions.go:102] verifying NodePressure condition ...
	I0522 18:59:46.681210  199775 round_trippers.go:463] GET https://192.168.67.2:8443/api/v1/nodes
	I0522 18:59:46.681238  199775 round_trippers.go:469] Request Headers:
	I0522 18:59:46.681269  199775 round_trippers.go:473]     Accept: application/json, */*
	I0522 18:59:46.681284  199775 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0522 18:59:46.684896  199775 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0522 18:59:46.684982  199775 round_trippers.go:577] Response Headers:
	I0522 18:59:46.685004  199775 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: eb662600-1059-447c-8d94-7d617635e0e9
	I0522 18:59:46.685037  199775 round_trippers.go:580]     Date: Wed, 22 May 2024 18:59:46 GMT
	I0522 18:59:46.685062  199775 round_trippers.go:580]     Audit-Id: 5b29536d-555e-447a-b5de-22d0f90e97ba
	I0522 18:59:46.685075  199775 round_trippers.go:580]     Cache-Control: no-cache, private
	I0522 18:59:46.685087  199775 round_trippers.go:580]     Content-Type: application/json
	I0522 18:59:46.685098  199775 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 231acc2d-4090-42ee-b5f2-5664be511057
	I0522 18:59:46.685273  199775 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1860"},"items":[{"metadata":{"name":"multinode-737786","uid":"e0a6b8ca-2b82-4b84-a05f-2732ee204f38","resourceVersion":"1788","creationTimestamp":"2024-05-22T18:32:35Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-737786","kubernetes.io/os":"linux","minikube.k8s.io/commit":"461168c3991b3796899fb93cd381299efb7493c9","minikube.k8s.io/name":"multinode-737786","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_22T18_32_38_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 5264 chars]
	I0522 18:59:46.685771  199775 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0522 18:59:46.685837  199775 node_conditions.go:123] node cpu capacity is 8
	I0522 18:59:46.685862  199775 node_conditions.go:105] duration metric: took 4.759995ms to run NodePressure ...
	I0522 18:59:46.685884  199775 start.go:240] waiting for startup goroutines ...
	I0522 18:59:46.685918  199775 start.go:245] waiting for cluster config update ...
	I0522 18:59:46.685946  199775 start.go:254] writing updated cluster config ...
	I0522 18:59:46.689398  199775 out.go:177] 
	I0522 18:59:46.690789  199775 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:59:46.690929  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
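
	The sweep above (apiserver /version, kube-system pod list, default service account, node capacity) maps onto ordinary kubectl reads; a minimal sketch, assuming the current kubeconfig context points at this cluster:

		kubectl get --raw /version                                   # apiserver health; v1.30.1 here
		kubectl -n kube-system get pods                              # the 8 kube-system pods listed above
		kubectl -n default get serviceaccount default                # the "default" service account
		kubectl get nodes -o jsonpath='{.items[*].status.capacity}'  # cpu / ephemeral-storage behind NodePressure
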
	I0522 18:59:46.692402  199775 out.go:177] * Starting "multinode-737786-m02" worker node in "multinode-737786" cluster
	I0522 18:59:46.693438  199775 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 18:59:46.694568  199775 out.go:177] * Pulling base image v0.0.44-1715707529-18887 ...
	I0522 18:59:46.695615  199775 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 18:59:46.695636  199775 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 18:59:46.695648  199775 cache.go:56] Caching tarball of preloaded images
	I0522 18:59:46.695725  199775 preload.go:173] Found /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0522 18:59:46.695732  199775 cache.go:59] Finished verifying existence of preloaded tar for v1.30.1 on docker
	I0522 18:59:46.695807  199775 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/multinode-737786/config.json ...
	I0522 18:59:46.710779  199775 image.go:83] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon, skipping pull
	I0522 18:59:46.710810  199775 cache.go:144] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in daemon, skipping load
	I0522 18:59:46.710821  199775 cache.go:194] Successfully downloaded all kic artifacts
	I0522 18:59:46.710843  199775 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:59:46.710893  199775 start.go:364] duration metric: took 34.69µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:59:46.710909  199775 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:59:46.710913  199775 fix.go:54] fixHost starting: m02
	I0522 18:59:46.711144  199775 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:46.726425  199775 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Stopped err=<nil>
	W0522 18:59:46.726450  199775 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:59:46.728022  199775 out.go:177] * Restarting existing docker container for "multinode-737786-m02" ...
	I0522 18:59:46.729121  199775 cli_runner.go:164] Run: docker start multinode-737786-m02
	I0522 18:59:47.097738  199775 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:47.114644  199775 kic.go:430] container "multinode-737786-m02" state is running.
	I0522 18:59:47.114997  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:47.131986  199775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:59:47.132026  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:47.148365  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32942 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	W0522 18:59:47.149133  199775 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:32804->127.0.0.1:32942: read: connection reset by peer
	I0522 18:59:47.149165  199775 retry.go:31] will retry after 266.09716ms: ssh: handshake failed: read tcp 127.0.0.1:32804->127.0.0.1:32942: read: connection reset by peer
	W0522 18:59:47.415745  199775 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 127.0.0.1:32818->127.0.0.1:32942: read: connection reset by peer
	I0522 18:59:47.415789  199775 retry.go:31] will retry after 300.769199ms: ssh: handshake failed: read tcp 127.0.0.1:32818->127.0.0.1:32942: read: connection reset by peer
	I0522 18:59:47.799305  199775 command_runner.go:130] > 27%
	I0522 18:59:47.799558  199775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 18:59:47.803128  199775 command_runner.go:130] > 213G
	I0522 18:59:47.803504  199775 fix.go:56] duration metric: took 1.092584564s for fixHost
	I0522 18:59:47.803525  199775 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1.09262176s
	W0522 18:59:47.803543  199775 start.go:713] error starting host: container addresses should have 2 values, got 1 values: []
	W0522 18:59:47.803607  199775 out.go:239] ! StartHost failed, but will try again: container addresses should have 2 values, got 1 values: []
	I0522 18:59:47.803619  199775 start.go:728] Will try again in 5 seconds ...
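
	A hedged diagnostic for the "container addresses should have 2 values, got 1 values" error above, reusing the inspect call the log already runs: when the "multinode-737786-m02" key is missing from .NetworkSettings.Networks, the Go template expands to an empty string, so splitting on "," yields one value instead of the expected "<ipv4>,<ipv6>" pair.

		# sketch only; container name taken from the log above
		docker container inspect multinode-737786-m02 --format '{{json .NetworkSettings.Networks}}'
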
	I0522 18:59:52.804451  199775 start.go:360] acquireMachinesLock for multinode-737786-m02: {Name:mke1c901314110cc43eda3aeb29a4097c072b861 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0522 18:59:52.804590  199775 start.go:364] duration metric: took 107.122µs to acquireMachinesLock for "multinode-737786-m02"
	I0522 18:59:52.804617  199775 start.go:96] Skipping create...Using existing machine configuration
	I0522 18:59:52.804626  199775 fix.go:54] fixHost starting: m02
	I0522 18:59:52.804865  199775 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:59:52.821071  199775 fix.go:112] recreateIfNeeded on multinode-737786-m02: state=Running err=<nil>
	W0522 18:59:52.821099  199775 fix.go:138] unexpected machine state, will restart: <nil>
	I0522 18:59:52.823752  199775 out.go:177] * Updating the running docker "multinode-737786-m02" container ...
	I0522 18:59:52.824906  199775 machine.go:94] provisionDockerMachine start ...
	I0522 18:59:52.824977  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:52.840560  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:52.840715  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32942 <nil> <nil>}
	I0522 18:59:52.840727  199775 main.go:141] libmachine: About to run SSH command:
	hostname
	I0522 18:59:52.954372  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:59:52.954401  199775 ubuntu.go:169] provisioning hostname "multinode-737786-m02"
	I0522 18:59:52.954462  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:52.970241  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:52.970424  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32942 <nil> <nil>}
	I0522 18:59:52.970443  199775 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-737786-m02 && echo "multinode-737786-m02" | sudo tee /etc/hostname
	I0522 18:59:53.093444  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-737786-m02
	
	I0522 18:59:53.093511  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 18:59:53.109566  199775 main.go:141] libmachine: Using SSH client type: native
	I0522 18:59:53.109727  199775 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d800] 0x830560 <nil>  [] 0s} 127.0.0.1 32942 <nil> <nil>}
	I0522 18:59:53.109744  199775 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-737786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-737786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-737786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0522 18:59:53.227092  199775 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0522 18:59:53.227117  199775 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/18943-9771/.minikube CaCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18943-9771/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18943-9771/.minikube}
	I0522 18:59:53.227130  199775 ubuntu.go:177] setting up certificates
	I0522 18:59:53.227142  199775 provision.go:84] configureAuth start
	I0522 18:59:53.227193  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.242688  199775 provision.go:87] duration metric: took 15.538812ms to configureAuth
	W0522 18:59:53.242709  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.242725  199775 retry.go:31] will retry after 79.472µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.243838  199775 provision.go:84] configureAuth start
	I0522 18:59:53.243893  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.259086  199775 provision.go:87] duration metric: took 15.229784ms to configureAuth
	W0522 18:59:53.259105  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.259122  199775 retry.go:31] will retry after 200.248µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.260190  199775 provision.go:84] configureAuth start
	I0522 18:59:53.260251  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.276300  199775 provision.go:87] duration metric: took 16.090219ms to configureAuth
	W0522 18:59:53.276322  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.276342  199775 retry.go:31] will retry after 158.856µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.277452  199775 provision.go:84] configureAuth start
	I0522 18:59:53.277509  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.292719  199775 provision.go:87] duration metric: took 15.24981ms to configureAuth
	W0522 18:59:53.292736  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.292752  199775 retry.go:31] will retry after 210.436µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.293867  199775 provision.go:84] configureAuth start
	I0522 18:59:53.293940  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.309101  199775 provision.go:87] duration metric: took 15.216271ms to configureAuth
	W0522 18:59:53.309122  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.309140  199775 retry.go:31] will retry after 296.144µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.310265  199775 provision.go:84] configureAuth start
	I0522 18:59:53.310331  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.325880  199775 provision.go:87] duration metric: took 15.595921ms to configureAuth
	W0522 18:59:53.325898  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.325912  199775 retry.go:31] will retry after 869.601µs: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.327034  199775 provision.go:84] configureAuth start
	I0522 18:59:53.327090  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.342107  199775 provision.go:87] duration metric: took 15.054569ms to configureAuth
	W0522 18:59:53.342125  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.342141  199775 retry.go:31] will retry after 1.679631ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.344324  199775 provision.go:84] configureAuth start
	I0522 18:59:53.344379  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.359510  199775 provision.go:87] duration metric: took 15.166039ms to configureAuth
	W0522 18:59:53.359530  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.359547  199775 retry.go:31] will retry after 2.000659ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.361724  199775 provision.go:84] configureAuth start
	I0522 18:59:53.361793  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.377321  199775 provision.go:87] duration metric: took 15.58099ms to configureAuth
	W0522 18:59:53.377337  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.377352  199775 retry.go:31] will retry after 2.840474ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.380526  199775 provision.go:84] configureAuth start
	I0522 18:59:53.380577  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.395455  199775 provision.go:87] duration metric: took 14.913193ms to configureAuth
	W0522 18:59:53.395487  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.395505  199775 retry.go:31] will retry after 2.345207ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.398696  199775 provision.go:84] configureAuth start
	I0522 18:59:53.398749  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.414009  199775 provision.go:87] duration metric: took 15.296264ms to configureAuth
	W0522 18:59:53.414027  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.414043  199775 retry.go:31] will retry after 6.930668ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.421219  199775 provision.go:84] configureAuth start
	I0522 18:59:53.421272  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.436178  199775 provision.go:87] duration metric: took 14.942398ms to configureAuth
	W0522 18:59:53.436192  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.436207  199775 retry.go:31] will retry after 10.301689ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.447397  199775 provision.go:84] configureAuth start
	I0522 18:59:53.447478  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.462197  199775 provision.go:87] duration metric: took 14.782569ms to configureAuth
	W0522 18:59:53.462213  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.462228  199775 retry.go:31] will retry after 17.860239ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.480384  199775 provision.go:84] configureAuth start
	I0522 18:59:53.480465  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.495396  199775 provision.go:87] duration metric: took 14.991137ms to configureAuth
	W0522 18:59:53.495412  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.495433  199775 retry.go:31] will retry after 20.664829ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.516621  199775 provision.go:84] configureAuth start
	I0522 18:59:53.516682  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.531845  199775 provision.go:87] duration metric: took 15.208135ms to configureAuth
	W0522 18:59:53.531863  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.531883  199775 retry.go:31] will retry after 43.708085ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.576086  199775 provision.go:84] configureAuth start
	I0522 18:59:53.576179  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.591863  199775 provision.go:87] duration metric: took 15.747177ms to configureAuth
	W0522 18:59:53.591880  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.591897  199775 retry.go:31] will retry after 58.013612ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.650117  199775 provision.go:84] configureAuth start
	I0522 18:59:53.650196  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.666163  199775 provision.go:87] duration metric: took 16.024136ms to configureAuth
	W0522 18:59:53.666179  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.666195  199775 retry.go:31] will retry after 59.150172ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.726406  199775 provision.go:84] configureAuth start
	I0522 18:59:53.726511  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.742421  199775 provision.go:87] duration metric: took 15.990636ms to configureAuth
	W0522 18:59:53.742440  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.742457  199775 retry.go:31] will retry after 79.255542ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.822691  199775 provision.go:84] configureAuth start
	I0522 18:59:53.822764  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:53.838971  199775 provision.go:87] duration metric: took 16.257557ms to configureAuth
	W0522 18:59:53.838991  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:53.839007  199775 retry.go:31] will retry after 161.972905ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.001318  199775 provision.go:84] configureAuth start
	I0522 18:59:54.001419  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:54.018018  199775 provision.go:87] duration metric: took 16.659513ms to configureAuth
	W0522 18:59:54.018037  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.018056  199775 retry.go:31] will retry after 120.204263ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.139317  199775 provision.go:84] configureAuth start
	I0522 18:59:54.139416  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:54.156237  199775 provision.go:87] duration metric: took 16.890851ms to configureAuth
	W0522 18:59:54.156258  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.156275  199775 retry.go:31] will retry after 481.652235ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.638820  199775 provision.go:84] configureAuth start
	I0522 18:59:54.638909  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:54.654914  199775 provision.go:87] duration metric: took 16.066954ms to configureAuth
	W0522 18:59:54.654931  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:54.654947  199775 retry.go:31] will retry after 524.561472ms: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:55.180403  199775 provision.go:84] configureAuth start
	I0522 18:59:55.180502  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:55.196860  199775 provision.go:87] duration metric: took 16.431785ms to configureAuth
	W0522 18:59:55.196883  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:55.196900  199775 retry.go:31] will retry after 1.026684822s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:56.224017  199775 provision.go:84] configureAuth start
	I0522 18:59:56.224094  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:56.240191  199775 provision.go:87] duration metric: took 16.147804ms to configureAuth
	W0522 18:59:56.240210  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:56.240225  199775 retry.go:31] will retry after 1.24830816s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:57.489410  199775 provision.go:84] configureAuth start
	I0522 18:59:57.489485  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:57.506066  199775 provision.go:87] duration metric: took 16.629889ms to configureAuth
	W0522 18:59:57.506088  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:57.506104  199775 retry.go:31] will retry after 1.980502509s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:59.487327  199775 provision.go:84] configureAuth start
	I0522 18:59:59.487416  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 18:59:59.504374  199775 provision.go:87] duration metric: took 17.016249ms to configureAuth
	W0522 18:59:59.504396  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 18:59:59.504412  199775 retry.go:31] will retry after 2.100742547s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:01.606222  199775 provision.go:84] configureAuth start
	I0522 19:00:01.606310  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:01.624599  199775 provision.go:87] duration metric: took 18.345208ms to configureAuth
	W0522 19:00:01.624616  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:01.624643  199775 retry.go:31] will retry after 5.341834603s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:06.967947  199775 provision.go:84] configureAuth start
	I0522 19:00:06.968041  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:06.984081  199775 provision.go:87] duration metric: took 16.106563ms to configureAuth
	W0522 19:00:06.984103  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:06.984121  199775 retry.go:31] will retry after 7.535474965s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:14.521931  199775 provision.go:84] configureAuth start
	I0522 19:00:14.522007  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:14.537786  199775 provision.go:87] duration metric: took 15.830622ms to configureAuth
	W0522 19:00:14.537805  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:14.537825  199775 retry.go:31] will retry after 5.817132428s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:20.355103  199775 provision.go:84] configureAuth start
	I0522 19:00:20.355186  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:20.371230  199775 provision.go:87] duration metric: took 16.098634ms to configureAuth
	W0522 19:00:20.371250  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:20.371288  199775 retry.go:31] will retry after 16.531933092s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:36.904939  199775 provision.go:84] configureAuth start
	I0522 19:00:36.905020  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:36.921551  199775 provision.go:87] duration metric: took 16.579992ms to configureAuth
	W0522 19:00:36.921570  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:36.921593  199775 retry.go:31] will retry after 19.248116686s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:56.170224  199775 provision.go:84] configureAuth start
	I0522 19:00:56.170298  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:00:56.186614  199775 provision.go:87] duration metric: took 16.363908ms to configureAuth
	W0522 19:00:56.186636  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:00:56.186651  199775 retry.go:31] will retry after 38.419670127s: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.608922  199775 provision.go:84] configureAuth start
	I0522 19:01:34.609067  199775 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	I0522 19:01:34.626132  199775 provision.go:87] duration metric: took 17.167716ms to configureAuth
	W0522 19:01:34.626153  199775 ubuntu.go:180] configureAuth failed: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.626184  199775 ubuntu.go:189] Error configuring auth during provisioning Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.626193  199775 machine.go:97] duration metric: took 1m41.801276248s to provisionDockerMachine
	I0522 19:01:34.626259  199775 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 19:01:34.626294  199775 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786-m02
	I0522 19:01:34.641200  199775 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32942 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786-m02/id_rsa Username:docker}
	I0522 19:01:34.723587  199775 command_runner.go:130] > 27%
	I0522 19:01:34.723897  199775 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0522 19:01:34.727710  199775 command_runner.go:130] > 213G
	I0522 19:01:34.727922  199775 fix.go:56] duration metric: took 1m41.923292489s for fixHost
	I0522 19:01:34.727946  199775 start.go:83] releasing machines lock for "multinode-737786-m02", held for 1m41.923338944s
	W0522 19:01:34.728039  199775 out.go:239] * Failed to start docker container. Running "minikube delete -p multinode-737786" may fix it: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	I0522 19:01:34.729983  199775 out.go:177] 
	W0522 19:01:34.731623  199775 out.go:239] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: Temporary Error: error getting ip during provisioning: container addresses should have 2 values, got 1 values: []
	W0522 19:01:34.731635  199775 out.go:239] * 
	W0522 19:01:34.732492  199775 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0522 19:01:34.734003  199775 out.go:177] 
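
	Beyond the suggested "minikube delete -p multinode-737786", a hypothetical manual recovery for the lost network attachment (network name assumed to equal the profile name, as minikube's docker driver does by default; not taken from this log):

		docker network connect multinode-737786 multinode-737786-m02
		docker restart multinode-737786-m02
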
	
	
	==> Docker <==
	May 22 18:59:40 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:40Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	May 22 18:59:40 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:40Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	May 22 18:59:40 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:40Z" level=info msg="Start cri-dockerd grpc backend"
	May 22 18:59:40 multinode-737786 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-7zbr8_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a6b52bbcc47a83fe266e6f891da30d8acaee28a3ce90bbbfa7209a66a33a7fc4\""
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-7zbr8_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"7fefb8ab9046a93fa90099406fe22d3ab5b99d1e81ed91b35c2e7790f7cd2c3c\""
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"635f4e9d5f8f1c8d7e841846d31b2e5cf268c887e750af271ef32caeb22d24a1\""
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"ada6e7b25c53306480ec3268f02ae3c0a31843cb50792174aefef87684d072cd\""
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"1d92837fd4e76b3940b513386b4537e60ec327f94a8fd3e6a1239115d2266fdf\". Proceed without further sandbox information."
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"df5064710014068ec6e2be583b4634e08f642ea3e283ac01c4442141654e1ed8\". Proceed without further sandbox information."
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"65627abb3612282d6558ffb1aafad214a42aaed131116b1b8f31f678c74ef0f4\". Proceed without further sandbox information."
	May 22 18:59:41 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:41Z" level=info msg="Both sandbox container and checkpoint could not be found with id \"4f2b347dd216a58bc9c88f683631484d66c1337fda1386d98d45876825741536\". Proceed without further sandbox information."
	May 22 18:59:42 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/73e7029823e69510db284e1a2d9944688d6d63821bb3133521fec8acc062f019/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:59:42 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f976896cd4ef6cced0bf65f8ad146afab0b4f231cf2dda27a00ecb2481446a7d/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options trust-ad ndots:0 edns0]"
	May 22 18:59:42 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b66bafe0bc18a2b95b4190d0aadcb5b3398a13c6971dc1100b58f06cc8b24003/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:59:42 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:42Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/973afb14402b2cf9fed66181381a57ea1a02c12e1a72616fed2712eb07063d3d/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:59:42 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:42Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-7zbr8_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"a6b52bbcc47a83fe266e6f891da30d8acaee28a3ce90bbbfa7209a66a33a7fc4\""
	May 22 18:59:42 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:42Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-jhsz9_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"635f4e9d5f8f1c8d7e841846d31b2e5cf268c887e750af271ef32caeb22d24a1\""
	May 22 18:59:45 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:45Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 22 18:59:46 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/791fd98a936b1eea862345c8ad291c2c2a79c5df22d31e4211362ad3b8f49bdc/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:59:46 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/ceec27a8063d65a466b0fef71db1c9771e265ff911a3b402b56487cf3ea342fb/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 18:59:46 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/49ece9078636649013fcd07eaef9f26ad9ccd024d2bbb8163e4c9a5936f92719/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:0 edns0 trust-ad]"
	May 22 18:59:46 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d51534e7f5ad8482121e40a42d9ccec0c35a4ae6d8a4e3ff70c7291f23f2961d/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options ndots:5]"
	May 22 18:59:46 multinode-737786 cri-dockerd[1251]: time="2024-05-22T18:59:46Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/3ade5c6026a9902b0790131f6feea45d6128d4eed74dc9b44e4340897da13d50/resolv.conf as [nameserver 192.168.67.1 search europe-west4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal options edns0 trust-ad ndots:0]"
	May 22 19:00:16 multinode-737786 dockerd[1002]: time="2024-05-22T19:00:16.463544332Z" level=info msg="ignoring event" container=0ecd788446e0396802ea7948c22331d27ffcd081334543786c2a5603aa0066b3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
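
	The repeated "networkPlugin cni failed" entries above can be cross-checked against the node's CNI config; a sketch assuming kubelet's default config directory:

		minikube -p multinode-737786 ssh -- ls /etc/cni/net.d
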
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	12ad1ebd390ee       6e38f40d628db       About a minute ago   Running             storage-provisioner       4                   ceec27a8063d6       storage-provisioner
	7e32e894af02e       cbb01a7bd410d       About a minute ago   Running             coredns                   3                   3ade5c6026a99       coredns-7db6d8ff4d-jhsz9
	04635bdb24374       8c811b4aec35f       About a minute ago   Running             busybox                   2                   d51534e7f5ad8       busybox-fc5497c4f-7zbr8
	c1c1478968352       ac1c61439df46       About a minute ago   Running             kindnet-cni               2                   49ece90786366       kindnet-qpfbl
	ea070c29df0b2       747097150317f       About a minute ago   Running             kube-proxy                2                   791fd98a936b1       kube-proxy-kqtgj
	0ecd788446e03       6e38f40d628db       About a minute ago   Exited              storage-provisioner       3                   ceec27a8063d6       storage-provisioner
	fed7036e80a06       25a1387cdab82       About a minute ago   Running             kube-controller-manager   2                   973afb14402b2       kube-controller-manager-multinode-737786
	121cc1e00b0c0       a52dc94f0a912       About a minute ago   Running             kube-scheduler            2                   b66bafe0bc18a       kube-scheduler-multinode-737786
	02b808dacb4f4       91be940803172       About a minute ago   Running             kube-apiserver            2                   f976896cd4ef6       kube-apiserver-multinode-737786
	da66f04d56153       3861cfcd7c04c       About a minute ago   Running             etcd                      2                   73e7029823e69       etcd-multinode-737786
	513df62eec3d7       cbb01a7bd410d       5 minutes ago        Exited              coredns                   2                   635f4e9d5f8f1       coredns-7db6d8ff4d-jhsz9
	ca4e4fb6fa63f       8c811b4aec35f       5 minutes ago        Exited              busybox                   1                   a6b52bbcc47a8       busybox-fc5497c4f-7zbr8
	43dd6bc557dd6       ac1c61439df46       5 minutes ago        Exited              kindnet-cni               1                   a6b2b3d758240       kindnet-qpfbl
	9e66337e0a3b0       747097150317f       5 minutes ago        Exited              kube-proxy                1                   fb1d360112edd       kube-proxy-kqtgj
	f57ae12003854       25a1387cdab82       5 minutes ago        Exited              kube-controller-manager   1                   fd5e5467e4321       kube-controller-manager-multinode-737786
	495d862fbc889       91be940803172       5 minutes ago        Exited              kube-apiserver            1                   7b6f81208c49b       kube-apiserver-multinode-737786
	94cf43c9c1855       a52dc94f0a912       5 minutes ago        Exited              kube-scheduler            1                   74a359ee9dc76       kube-scheduler-multinode-737786
	eefaf11c384e1       3861cfcd7c04c       5 minutes ago        Exited              etcd                      1                   2558846c3bbbb       etcd-multinode-737786
	
	
	==> coredns [513df62eec3d] <==
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59311 - 41845 "HINFO IN 6854891090202188984.7957026021720121455. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009982044s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[445986774]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[445986774]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:56:11.806)
	Trace[445986774]: [30.001125532s] [30.001125532s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1234663045]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[1234663045]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (18:56:11.806)
	Trace[1234663045]: [30.001264536s] [30.001264536s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[889784802]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:55:41.805) (total time: 30001ms):
	Trace[889784802]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (18:56:11.806)
	Trace[889784802]: [30.001227605s] [30.001227605s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [7e32e894af02] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = bfa258e3dfcd8004ab6c7d60772766a595ee209e49c62e6ae56bd911a145318b327e0c73bbccac30667047dafea6a8c1149027cea85d58a2246677e8ec1caab2
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:40368 - 15272 "HINFO IN 4301484859762446568.3283777757335701580. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.014397886s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[134378416]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:59:46.766) (total time: 30000ms):
	Trace[134378416]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:00:16.767)
	Trace[134378416]: [30.000891057s] [30.000891057s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[82141616]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:59:46.767) (total time: 30000ms):
	Trace[82141616]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:00:16.767)
	Trace[82141616]: [30.00027498s] [30.00027498s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[135565988]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-May-2024 18:59:46.766) (total time: 30000ms):
	Trace[135565988]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (19:00:16.767)
	Trace[135565988]: [30.000879618s] [30.000879618s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               multinode-737786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-737786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=461168c3991b3796899fb93cd381299efb7493c9
	                    minikube.k8s.io/name=multinode-737786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_22T18_32_38_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 22 May 2024 18:32:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-737786
	  AcquireTime:     <unset>
	  RenewTime:       Wed, 22 May 2024 19:01:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 22 May 2024 18:59:45 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 22 May 2024 18:59:45 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 22 May 2024 18:59:45 +0000   Wed, 22 May 2024 18:32:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 22 May 2024 18:59:45 +0000   Wed, 22 May 2024 18:32:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    multinode-737786
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859356Ki
	  pods:               110
	System Info:
	  Machine ID:                 3045ae8289fb40a08ac17460e6ab577d
	  System UUID:                6e83f646-e235-46a9-a358-7eec8a5b1ae0
	  Boot ID:                    e5b4465e-51c8-4026-9dab-c7060cf83b22
	  Kernel Version:             5.15.0-1060-gcp
	  OS Image:                   Ubuntu 22.04.4 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.1.2
	  Kubelet Version:            v1.30.1
	  Kube-Proxy Version:         v1.30.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-7zbr8                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
	  kube-system                 coredns-7db6d8ff4d-jhsz9                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28m
	  kube-system                 etcd-multinode-737786                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28m
	  kube-system                 kindnet-qpfbl                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28m
	  kube-system                 kube-apiserver-multinode-737786             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-multinode-737786    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-kqtgj                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-multinode-737786             100m (1%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 108s                   kube-proxy       
	  Normal  Starting                 5m53s                  kube-proxy       
	  Normal  Starting                 28m                    kube-proxy       
	  Normal  Starting                 28m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m                    kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                    kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                    kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m                    node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	  Normal  NodeHasSufficientMemory  5m59s (x8 over 5m59s)  kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    5m59s (x8 over 5m59s)  kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m59s (x7 over 5m59s)  kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m43s                  node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	  Normal  Starting                 114s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  114s (x8 over 114s)    kubelet          Node multinode-737786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s (x8 over 114s)    kubelet          Node multinode-737786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s (x7 over 114s)    kubelet          Node multinode-737786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  114s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           98s                    node-controller  Node multinode-737786 event: Registered Node multinode-737786 in Controller
	
	
	==> dmesg <==
	[  +0.000003] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +1.008035] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000006] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000007] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000001] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000004] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000001] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +2.019826] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000006] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000001] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +4.091719] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000006] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000001] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000003] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[May22 19:00] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000006] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000005] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	[  +0.000002] IPv4: martian source 10.96.0.1 from 10.244.0.3, on dev br-b174e10eedee
	[  +0.000002] ll header: 00000000: 02 42 6d b9 14 64 02 42 c0 a8 43 02 08 00
	
	
	==> etcd [da66f04d5615] <==
	{"level":"info","ts":"2024-05-22T18:59:42.567319Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","added-peer-id":"8688e899f7831fc7","added-peer-peer-urls":["https://192.168.67.2:2380"]}
	{"level":"info","ts":"2024-05-22T18:59:42.567462Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9d8fdeb88b6def78","local-member-id":"8688e899f7831fc7","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:59:42.567505Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-22T18:59:42.568128Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:59:42.568206Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:59:42.568217Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-22T18:59:42.569813Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-22T18:59:42.570023Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"8688e899f7831fc7","initial-advertise-peer-urls":["https://192.168.67.2:2380"],"listen-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.67.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-22T18:59:42.570053Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:59:42.57014Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:59:42.57015Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:59:43.957456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 3"}
	{"level":"info","ts":"2024-05-22T18:59:43.957508Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:59:43.957542Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-05-22T18:59:43.957556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 4"}
	{"level":"info","ts":"2024-05-22T18:59:43.957562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-05-22T18:59:43.957572Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 4"}
	{"level":"info","ts":"2024-05-22T18:59:43.957581Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 4"}
	{"level":"info","ts":"2024-05-22T18:59:43.959182Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:59:43.959186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:59:43.959206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:59:43.959482Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:59:43.959515Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:59:43.961065Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-05-22T18:59:43.961073Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> etcd [eefaf11c384e] <==
	{"level":"info","ts":"2024-05-22T18:55:38.050381Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-22T18:55:38.967217Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.967297Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.967325Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgPreVoteResp from 8688e899f7831fc7 at term 2"}
	{"level":"info","ts":"2024-05-22T18:55:38.96734Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became candidate at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967346Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 received MsgVoteResp from 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967355Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8688e899f7831fc7 became leader at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.967365Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 8688e899f7831fc7 elected leader 8688e899f7831fc7 at term 3"}
	{"level":"info","ts":"2024-05-22T18:55:38.969829Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"8688e899f7831fc7","local-member-attributes":"{Name:multinode-737786 ClientURLs:[https://192.168.67.2:2379]}","request-path":"/0/members/8688e899f7831fc7/attributes","cluster-id":"9d8fdeb88b6def78","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-22T18:55:38.969867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:55:38.969858Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-22T18:55:38.970074Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-22T18:55:38.970142Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-22T18:55:38.971872Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.67.2:2379"}
	{"level":"info","ts":"2024-05-22T18:55:38.971922Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-22T18:59:23.825415Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-05-22T18:59:23.825519Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-737786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	{"level":"warn","ts":"2024-05-22T18:59:23.825645Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:59:23.825766Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:59:23.848804Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-05-22T18:59:23.848859Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.67.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-05-22T18:59:23.848926Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"8688e899f7831fc7","current-leader-member-id":"8688e899f7831fc7"}
	{"level":"info","ts":"2024-05-22T18:59:23.85251Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:59:23.852666Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.67.2:2380"}
	{"level":"info","ts":"2024-05-22T18:59:23.85271Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-737786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.67.2:2380"],"advertise-client-urls":["https://192.168.67.2:2379"]}
	
	
	==> kernel <==
	 19:01:35 up  1:43,  0 users,  load average: 1.01, 0.44, 0.36
	Linux multinode-737786 5.15.0-1060-gcp #68~20.04.1-Ubuntu SMP Wed May 1 14:35:27 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.4 LTS"
	
	
	==> kindnet [43dd6bc557dd] <==
	I0522 18:57:22.532090       1 main.go:227] handling current node
	I0522 18:57:32.535480       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:32.535503       1 main.go:227] handling current node
	I0522 18:57:42.538762       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:42.538784       1 main.go:227] handling current node
	I0522 18:57:52.543959       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:57:52.543983       1 main.go:227] handling current node
	I0522 18:58:02.555704       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:02.555731       1 main.go:227] handling current node
	I0522 18:58:12.559567       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:12.559591       1 main.go:227] handling current node
	I0522 18:58:22.568605       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:22.568626       1 main.go:227] handling current node
	I0522 18:58:32.571486       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:32.571508       1 main.go:227] handling current node
	I0522 18:58:42.574487       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:42.574512       1 main.go:227] handling current node
	I0522 18:58:52.586300       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:58:52.586327       1 main.go:227] handling current node
	I0522 18:59:02.589702       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:59:02.589723       1 main.go:227] handling current node
	I0522 18:59:12.601733       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:59:12.601755       1 main.go:227] handling current node
	I0522 18:59:22.612201       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:59:22.612224       1 main.go:227] handling current node
	
	
	==> kindnet [c1c147896835] <==
	I0522 18:59:46.650299       1 main.go:116] setting mtu 1500 for CNI 
	I0522 18:59:46.650322       1 main.go:146] kindnetd IP family: "ipv4"
	I0522 18:59:46.650336       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0522 18:59:47.046184       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:59:47.046234       1 main.go:227] handling current node
	I0522 18:59:57.055837       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 18:59:57.055862       1 main.go:227] handling current node
	I0522 19:00:07.066203       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:00:07.066227       1 main.go:227] handling current node
	I0522 19:00:17.069274       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:00:17.069296       1 main.go:227] handling current node
	I0522 19:00:27.078658       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:00:27.078685       1 main.go:227] handling current node
	I0522 19:00:37.082292       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:00:37.082313       1 main.go:227] handling current node
	I0522 19:00:47.093926       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:00:47.093949       1 main.go:227] handling current node
	I0522 19:00:57.097223       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:00:57.097245       1 main.go:227] handling current node
	I0522 19:01:07.106324       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:01:07.106349       1 main.go:227] handling current node
	I0522 19:01:17.110053       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:01:17.110079       1 main.go:227] handling current node
	I0522 19:01:27.118122       1 main.go:223] Handling node with IPs: map[192.168.67.2:{}]
	I0522 19:01:27.118143       1 main.go:227] handling current node
	
	
	==> kube-apiserver [02b808dacb4f] <==
	I0522 18:59:44.885132       1 establishing_controller.go:76] Starting EstablishingController
	I0522 18:59:44.885150       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0522 18:59:44.885159       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0522 18:59:44.885183       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0522 18:59:44.889146       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0522 18:59:45.044251       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0522 18:59:45.044308       1 policy_source.go:224] refreshing policies
	I0522 18:59:45.044659       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0522 18:59:45.044827       1 shared_informer.go:320] Caches are synced for configmaps
	I0522 18:59:45.044876       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0522 18:59:45.044892       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0522 18:59:45.045331       1 aggregator.go:165] initial CRD sync complete...
	I0522 18:59:45.045344       1 autoregister_controller.go:141] Starting autoregister controller
	I0522 18:59:45.045350       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0522 18:59:45.045357       1 cache.go:39] Caches are synced for autoregister controller
	I0522 18:59:45.046369       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0522 18:59:45.046382       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0522 18:59:45.046409       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0522 18:59:45.046439       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0522 18:59:45.052475       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0522 18:59:45.055526       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E0522 18:59:45.057111       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0522 18:59:45.892555       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0522 18:59:57.905666       1 controller.go:615] quota admission added evaluator for: endpoints
	I0522 18:59:57.929692       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [495d862fbc88] <==
	W0522 18:59:33.135992       1 logging.go:59] [core] [Channel #178 SubChannel #179] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.144465       1 logging.go:59] [core] [Channel #70 SubChannel #71] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.154051       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.159651       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.176821       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.184347       1 logging.go:59] [core] [Channel #61 SubChannel #62] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.187717       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.194309       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.200824       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.311617       1 logging.go:59] [core] [Channel #34 SubChannel #35] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.345037       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.346315       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.382750       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.469930       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.490562       1 logging.go:59] [core] [Channel #199 SubChannel #200] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.495939       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.546835       1 logging.go:59] [core] [Channel #124 SubChannel #125] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.552283       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.653453       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.690429       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.725248       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.739365       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.740599       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.755436       1 logging.go:59] [core] [Channel #112 SubChannel #113] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0522 18:59:33.816983       1 logging.go:59] [core] [Channel #64 SubChannel #65] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [f57ae1200385] <==
	I0522 18:55:52.857093       1 shared_informer.go:320] Caches are synced for daemon sets
	I0522 18:55:52.858302       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:55:52.861613       1 shared_informer.go:320] Caches are synced for disruption
	I0522 18:55:52.862744       1 shared_informer.go:320] Caches are synced for stateful set
	I0522 18:55:52.868272       1 shared_informer.go:320] Caches are synced for attach detach
	I0522 18:55:52.868302       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0522 18:55:52.868326       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0522 18:55:52.868368       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0522 18:55:52.869529       1 shared_informer.go:320] Caches are synced for expand
	I0522 18:55:52.876074       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="63.118981ms"
	I0522 18:55:52.876376       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="117.821µs"
	I0522 18:55:52.907328       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0522 18:55:52.918533       1 shared_informer.go:320] Caches are synced for crt configmap
	I0522 18:55:52.953194       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0522 18:55:52.966173       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:55:52.967979       1 shared_informer.go:320] Caches are synced for job
	I0522 18:55:52.972293       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:55:53.014552       1 shared_informer.go:320] Caches are synced for cronjob
	I0522 18:55:53.051331       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0522 18:55:53.055811       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 18:55:53.485614       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:55:53.518125       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:55:53.518148       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 18:56:15.529637       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.104196ms"
	I0522 18:56:15.529730       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="51.444µs"
	
	
	==> kube-controller-manager [fed7036e80a0] <==
	I0522 18:59:57.918041       1 shared_informer.go:320] Caches are synced for node
	I0522 18:59:57.918105       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0522 18:59:57.918145       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0522 18:59:57.918155       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0522 18:59:57.918159       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0522 18:59:57.920401       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.725168ms"
	I0522 18:59:57.920738       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="68.26µs"
	I0522 18:59:57.921524       1 shared_informer.go:320] Caches are synced for persistent volume
	I0522 18:59:57.922694       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0522 18:59:57.925433       1 shared_informer.go:320] Caches are synced for crt configmap
	I0522 18:59:57.926641       1 shared_informer.go:320] Caches are synced for GC
	I0522 18:59:57.927813       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0522 18:59:57.930970       1 shared_informer.go:320] Caches are synced for namespace
	I0522 18:59:57.932335       1 shared_informer.go:320] Caches are synced for job
	I0522 18:59:57.993460       1 shared_informer.go:320] Caches are synced for cronjob
	I0522 18:59:58.058578       1 shared_informer.go:320] Caches are synced for disruption
	I0522 18:59:58.086227       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:59:58.111716       1 shared_informer.go:320] Caches are synced for deployment
	I0522 18:59:58.124029       1 shared_informer.go:320] Caches are synced for attach detach
	I0522 18:59:58.131736       1 shared_informer.go:320] Caches are synced for resource quota
	I0522 18:59:58.547584       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:59:58.620740       1 shared_informer.go:320] Caches are synced for garbage collector
	I0522 18:59:58.620770       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0522 19:00:21.409950       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.801723ms"
	I0522 19:00:21.410055       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="64.963µs"
	
	
	==> kube-proxy [9e66337e0a3b] <==
	I0522 18:55:41.578062       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:55:41.643800       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:55:41.666145       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:55:41.666189       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:55:41.668333       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:55:41.668357       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:55:41.668379       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:55:41.668660       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:55:41.668683       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:55:41.669565       1 config.go:192] "Starting service config controller"
	I0522 18:55:41.669588       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:55:41.669604       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:55:41.669612       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:55:41.669709       1 config.go:319] "Starting node config controller"
	I0522 18:55:41.669715       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:55:41.770605       1 shared_informer.go:320] Caches are synced for node config
	I0522 18:55:41.770630       1 shared_informer.go:320] Caches are synced for service config
	I0522 18:55:41.770656       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ea070c29df0b] <==
	I0522 18:59:46.563698       1 server_linux.go:69] "Using iptables proxy"
	I0522 18:59:46.574724       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.67.2"]
	I0522 18:59:46.676364       1 server.go:659] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0522 18:59:46.676420       1 server_linux.go:165] "Using iptables Proxier"
	I0522 18:59:46.678550       1 server_linux.go:511] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0522 18:59:46.678577       1 server_linux.go:528] "Defaulting to no-op detect-local"
	I0522 18:59:46.678604       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0522 18:59:46.678852       1 server.go:872] "Version info" version="v1.30.1"
	I0522 18:59:46.678877       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:59:46.682303       1 config.go:319] "Starting node config controller"
	I0522 18:59:46.682371       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0522 18:59:46.682666       1 config.go:192] "Starting service config controller"
	I0522 18:59:46.682689       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0522 18:59:46.682709       1 config.go:101] "Starting endpoint slice config controller"
	I0522 18:59:46.682714       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0522 18:59:46.782832       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0522 18:59:46.782876       1 shared_informer.go:320] Caches are synced for node config
	I0522 18:59:46.782880       1 shared_informer.go:320] Caches are synced for service config
	
	
	==> kube-scheduler [121cc1e00b0c] <==
	I0522 18:59:43.370630       1 serving.go:380] Generated self-signed cert in-memory
	W0522 18:59:44.955877       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0522 18:59:44.955924       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0522 18:59:44.955939       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0522 18:59:44.955948       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0522 18:59:44.968740       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0522 18:59:44.968765       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:59:44.970987       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0522 18:59:44.971104       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0522 18:59:44.971116       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:59:44.971140       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0522 18:59:45.071890       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [94cf43c9c185] <==
	I0522 18:55:38.604832       1 serving.go:380] Generated self-signed cert in-memory
	W0522 18:55:39.946054       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0522 18:55:39.946095       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0522 18:55:39.946107       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0522 18:55:39.946116       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0522 18:55:39.960130       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.1"
	I0522 18:55:39.960159       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0522 18:55:39.962719       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0522 18:55:39.962851       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0522 18:55:39.962872       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:55:39.962893       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0522 18:55:40.163188       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0522 18:59:23.857792       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0522 18:59:23.857885       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0522 18:59:23.858035       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0522 18:59:23.858267       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 22 18:59:45 multinode-737786 kubelet[1452]: I0522 18:59:45.565266    1452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75-lib-modules\") pod \"kube-proxy-kqtgj\" (UID: \"b8e5e1ef-4bb9-47e5-ba4d-860f8cfd8f75\") " pod="kube-system/kube-proxy-kqtgj"
	May 22 18:59:45 multinode-737786 kubelet[1452]: I0522 18:59:45.565371    1452 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e454b0cd-e618-4268-8882-69d2a4544917-lib-modules\") pod \"kindnet-qpfbl\" (UID: \"e454b0cd-e618-4268-8882-69d2a4544917\") " pod="kube-system/kindnet-qpfbl"
	May 22 18:59:46 multinode-737786 kubelet[1452]: I0522 18:59:46.343983    1452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d51534e7f5ad8482121e40a42d9ccec0c35a4ae6d8a4e3ff70c7291f23f2961d"
	May 22 18:59:46 multinode-737786 kubelet[1452]: I0522 18:59:46.355788    1452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ade5c6026a9902b0790131f6feea45d6128d4eed74dc9b44e4340897da13d50"
	May 22 18:59:46 multinode-737786 kubelet[1452]: I0522 18:59:46.475622    1452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="791fd98a936b1eea862345c8ad291c2c2a79c5df22d31e4211362ad3b8f49bdc"
	May 22 18:59:46 multinode-737786 kubelet[1452]: I0522 18:59:46.488957    1452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ceec27a8063d65a466b0fef71db1c9771e265ff911a3b402b56487cf3ea342fb"
	May 22 18:59:46 multinode-737786 kubelet[1452]: I0522 18:59:46.572423    1452 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49ece9078636649013fcd07eaef9f26ad9ccd024d2bbb8163e4c9a5936f92719"
	May 22 18:59:46 multinode-737786 kubelet[1452]: E0522 18:59:46.660034    1452 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-multinode-737786\" already exists" pod="kube-system/kube-apiserver-multinode-737786"
	May 22 18:59:46 multinode-737786 kubelet[1452]: E0522 18:59:46.663077    1452 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-multinode-737786\" already exists" pod="kube-system/kube-controller-manager-multinode-737786"
	May 22 18:59:48 multinode-737786 kubelet[1452]: I0522 18:59:48.642120    1452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 22 18:59:51 multinode-737786 kubelet[1452]: I0522 18:59:51.392388    1452 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 22 18:59:51 multinode-737786 kubelet[1452]: E0522 18:59:51.594842    1452 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 18:59:51 multinode-737786 kubelet[1452]: E0522 18:59:51.594928    1452 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 19:00:01 multinode-737786 kubelet[1452]: E0522 19:00:01.614926    1452 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 19:00:01 multinode-737786 kubelet[1452]: E0522 19:00:01.614960    1452 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 19:00:11 multinode-737786 kubelet[1452]: E0522 19:00:11.633828    1452 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 19:00:11 multinode-737786 kubelet[1452]: E0522 19:00:11.633862    1452 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 19:00:16 multinode-737786 kubelet[1452]: I0522 19:00:16.849283    1452 scope.go:117] "RemoveContainer" containerID="2775772a4970afd809b1153b9f9a8566798719af126fd0dbf43b5d09a37d5d40"
	May 22 19:00:16 multinode-737786 kubelet[1452]: I0522 19:00:16.849625    1452 scope.go:117] "RemoveContainer" containerID="0ecd788446e0396802ea7948c22331d27ffcd081334543786c2a5603aa0066b3"
	May 22 19:00:16 multinode-737786 kubelet[1452]: E0522 19:00:16.849925    1452 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(5d953629-c86b-47be-84da-baa3bdf24d2e)\"" pod="kube-system/storage-provisioner" podUID="5d953629-c86b-47be-84da-baa3bdf24d2e"
	May 22 19:00:21 multinode-737786 kubelet[1452]: E0522 19:00:21.652001    1452 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 19:00:21 multinode-737786 kubelet[1452]: E0522 19:00:21.652037    1452 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	May 22 19:00:29 multinode-737786 kubelet[1452]: I0522 19:00:29.449812    1452 scope.go:117] "RemoveContainer" containerID="0ecd788446e0396802ea7948c22331d27ffcd081334543786c2a5603aa0066b3"
	May 22 19:00:31 multinode-737786 kubelet[1452]: E0522 19:00:31.670314    1452 summary_sys_containers.go:48] "Failed to get system container stats" err="failed to get cgroup stats for \"/kubepods\": failed to get container info for \"/kubepods\": unknown container \"/kubepods\"" containerName="/kubepods"
	May 22 19:00:31 multinode-737786 kubelet[1452]: E0522 19:00:31.670374    1452 helpers.go:857] "Eviction manager: failed to construct signal" err="system container \"pods\" not found in metrics" signal="allocatableMemory.available"
	
	
	==> storage-provisioner [0ecd788446e0] <==
	I0522 18:59:46.450170       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0522 19:00:16.452857       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [12ad1ebd390e] <==
	I0522 19:00:29.527476       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0522 19:00:29.534283       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0522 19:00:29.534319       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0522 19:00:46.929171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0522 19:00:46.929225       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3dea4377-4a66-4281-97bf-645c7b9c6dfb", APIVersion:"v1", ResourceVersion:"1912", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' multinode-737786_11ded4b5-3ebf-430b-83be-0a2521720b5a became leader
	I0522 19:00:46.929312       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_multinode-737786_11ded4b5-3ebf-430b-83be-0a2521720b5a!
	I0522 19:00:47.029547       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_multinode-737786_11ded4b5-3ebf-430b-83be-0a2521720b5a!
	

                                                
                                                
-- /stdout --
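The storage-provisioner failure captured above (dial tcp 10.96.0.1:443: i/o timeout) means the pod could not reach the in-cluster API endpoint: 10.96.0.1 is the ClusterIP of the default kubernetes Service. A minimal first check from outside the cluster, assuming a working kubectl context for this profile (illustrative commands, not part of the test run):

	kubectl --context multinode-737786 get svc kubernetes -o wide
	kubectl --context multinode-737786 get endpoints kubernetes
	# the Service fronts the API server; missing or stale endpoints here would
	# explain i/o timeouts from in-cluster clients such as the provisioner
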
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-737786 -n multinode-737786
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-737786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-cq58n
helpers_test.go:274: ======> post-mortem[TestMultiNode/serial/RestartMultiNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n
helpers_test.go:282: (dbg) kubectl --context multinode-737786 describe pod busybox-fc5497c4f-cq58n:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-cq58n
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t8bjg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-t8bjg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  111s                 default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  9m57s (x4 over 25m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5m56s                default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
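The FailedScheduling events above are pod anti-affinity at work: the busybox replicas are spread so that no two share a node, so with only one schedulable node left the second replica stays Pending rather than being placed alongside the first. The manifest itself is not part of this report, but the constraint can be inspected directly; a sketch, assuming the ReplicaSet shown above belongs to a deployment named busybox:

	kubectl --context multinode-737786 get deployment busybox \
	  -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
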
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (121.89s)

                                                
                                    

Test pass (295/342)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 40.03
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.05
9 TestDownloadOnly/v1.20.0/DeleteAll 0.18
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.11
12 TestDownloadOnly/v1.30.1/json-events 13.99
13 TestDownloadOnly/v1.30.1/preload-exists 0
17 TestDownloadOnly/v1.30.1/LogsDuration 0.05
18 TestDownloadOnly/v1.30.1/DeleteAll 0.18
19 TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds 0.11
20 TestDownloadOnlyKic 0.98
21 TestBinaryMirror 0.68
22 TestOffline 62.05
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.04
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.04
27 TestAddons/Setup 134.12
29 TestAddons/parallel/Registry 15.09
30 TestAddons/parallel/Ingress 20.68
31 TestAddons/parallel/InspektorGadget 11.6
32 TestAddons/parallel/MetricsServer 5.7
33 TestAddons/parallel/HelmTiller 11.71
35 TestAddons/parallel/CSI 41.25
36 TestAddons/parallel/Headlamp 12.64
37 TestAddons/parallel/CloudSpanner 5.38
38 TestAddons/parallel/LocalPath 54.53
39 TestAddons/parallel/NvidiaDevicePlugin 6.43
40 TestAddons/parallel/Yakd 6
43 TestAddons/serial/GCPAuth/Namespaces 0.11
44 TestAddons/StoppedEnableDisable 11.01
45 TestCertOptions 30.03
46 TestCertExpiration 229.32
47 TestDockerFlags 25.37
48 TestForceSystemdFlag 31.47
49 TestForceSystemdEnv 35.07
51 TestKVMDriverInstallOrUpdate 4.9
55 TestErrorSpam/setup 20.54
56 TestErrorSpam/start 0.53
57 TestErrorSpam/status 0.79
58 TestErrorSpam/pause 1.05
59 TestErrorSpam/unpause 1.08
60 TestErrorSpam/stop 10.81
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 37.25
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 32.7
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.08
71 TestFunctional/serial/CacheCmd/cache/add_remote 2.27
72 TestFunctional/serial/CacheCmd/cache/add_local 1.68
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.04
74 TestFunctional/serial/CacheCmd/cache/list 0.04
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.14
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.09
80 TestFunctional/serial/ExtraConfig 38.99
81 TestFunctional/serial/ComponentHealth 0.06
82 TestFunctional/serial/LogsCmd 0.88
83 TestFunctional/serial/LogsFileCmd 0.91
84 TestFunctional/serial/InvalidService 4.22
86 TestFunctional/parallel/ConfigCmd 0.36
87 TestFunctional/parallel/DashboardCmd 15.96
88 TestFunctional/parallel/DryRun 0.31
89 TestFunctional/parallel/InternationalLanguage 0.14
90 TestFunctional/parallel/StatusCmd 0.81
94 TestFunctional/parallel/ServiceCmdConnect 21.63
95 TestFunctional/parallel/AddonsCmd 0.13
96 TestFunctional/parallel/PersistentVolumeClaim 41.98
98 TestFunctional/parallel/SSHCmd 0.54
99 TestFunctional/parallel/CpCmd 1.68
100 TestFunctional/parallel/MySQL 23.45
101 TestFunctional/parallel/FileSync 0.29
102 TestFunctional/parallel/CertSync 1.73
106 TestFunctional/parallel/NodeLabels 0.05
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.24
110 TestFunctional/parallel/License 0.6
112 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.44
113 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
115 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 20.27
116 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
117 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
121 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
122 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
123 TestFunctional/parallel/ProfileCmd/profile_not_create 0.32
124 TestFunctional/parallel/MountCmd/any-port 7.67
125 TestFunctional/parallel/ProfileCmd/profile_list 0.33
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.3
127 TestFunctional/parallel/ServiceCmd/List 1.66
128 TestFunctional/parallel/ServiceCmd/JSONOutput 1.67
129 TestFunctional/parallel/MountCmd/specific-port 1.99
130 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
131 TestFunctional/parallel/ServiceCmd/Format 0.55
132 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
133 TestFunctional/parallel/ServiceCmd/URL 0.72
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.19
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
138 TestFunctional/parallel/ImageCommands/ImageBuild 3.53
139 TestFunctional/parallel/ImageCommands/Setup 2.13
140 TestFunctional/parallel/DockerEnv/bash 0.97
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.18
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
145 TestFunctional/parallel/Version/short 0.04
146 TestFunctional/parallel/Version/components 0.44
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.56
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.77
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.39
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.92
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.97
153 TestFunctional/delete_addon-resizer_images 0.07
154 TestFunctional/delete_my-image_image 0.02
155 TestFunctional/delete_minikube_cached_images 0.01
163 TestMultiControlPlane/serial/NodeLabels 0.05
165 TestMultiControlPlane/serial/CopyFile 3.71
180 TestImageBuild/serial/Setup 20.95
181 TestImageBuild/serial/NormalBuild 2.7
182 TestImageBuild/serial/BuildWithBuildArg 0.92
183 TestImageBuild/serial/BuildWithDockerIgnore 0.8
184 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.8
188 TestJSONOutput/start/Command 77.36
189 TestJSONOutput/start/Audit 0
191 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/pause/Command 0.45
195 TestJSONOutput/pause/Audit 0
197 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/unpause/Command 0.4
201 TestJSONOutput/unpause/Audit 0
203 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
206 TestJSONOutput/stop/Command 10.77
207 TestJSONOutput/stop/Audit 0
209 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
210 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
211 TestErrorJSONOutput 0.18
213 TestKicCustomNetwork/create_custom_network 23.14
214 TestKicCustomNetwork/use_default_bridge_network 267.2
215 TestKicExistingNetwork 249.01
216 TestKicCustomSubnet 25.3
217 TestKicStaticIP 25.96
218 TestMainNoArgs 0.04
219 TestMinikubeProfile 51.9
222 TestMountStart/serial/StartWithMountFirst 10.01
223 TestMountStart/serial/VerifyMountFirst 0.23
224 TestMountStart/serial/StartWithMountSecond 9.47
225 TestMountStart/serial/VerifyMountSecond 0.22
226 TestMountStart/serial/DeleteFirst 1.44
227 TestMountStart/serial/VerifyMountPostDelete 0.23
228 TestMountStart/serial/Stop 1.16
229 TestMountStart/serial/RestartStopped 8.19
230 TestMountStart/serial/VerifyMountPostStop 0.22
237 TestMultiNode/serial/MultiNodeLabels 0.05
238 TestMultiNode/serial/ProfileList 0.26
239 TestMultiNode/serial/CopyFile 7.97
246 TestMultiNode/serial/ValidateNameConflict 25.87
251 TestPreload 170.77
253 TestScheduledStopUnix 96.51
254 TestSkaffold 110.86
256 TestInsufficientStorage 12.56
257 TestRunningBinaryUpgrade 141.71
259 TestKubernetesUpgrade 338.9
260 TestMissingContainerUpgrade 182.34
272 TestStoppedBinaryUpgrade/Setup 2.55
273 TestStoppedBinaryUpgrade/Upgrade 89.72
274 TestStoppedBinaryUpgrade/MinikubeLogs 1.05
276 TestPause/serial/Start 45.76
285 TestNoKubernetes/serial/StartNoK8sWithVersion 0.06
286 TestNoKubernetes/serial/StartWithK8s 25.69
287 TestNoKubernetes/serial/StartWithStopK8s 16.55
288 TestPause/serial/SecondStartNoReconfiguration 31.74
289 TestNoKubernetes/serial/Start 6.13
290 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
291 TestNoKubernetes/serial/ProfileList 2.62
292 TestNoKubernetes/serial/Stop 1.17
293 TestNoKubernetes/serial/StartNoArgs 7.37
294 TestPause/serial/Pause 0.48
295 TestPause/serial/VerifyStatus 0.29
296 TestPause/serial/Unpause 0.47
297 TestPause/serial/PauseAgain 0.64
298 TestPause/serial/DeletePaused 2.16
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
300 TestPause/serial/VerifyDeletedResources 0.53
301 TestNetworkPlugins/group/auto/Start 77.81
302 TestNetworkPlugins/group/kindnet/Start 53.2
303 TestNetworkPlugins/group/calico/Start 67.64
304 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
305 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
306 TestNetworkPlugins/group/kindnet/NetCatPod 9.24
307 TestNetworkPlugins/group/kindnet/DNS 0.15
308 TestNetworkPlugins/group/kindnet/Localhost 0.12
309 TestNetworkPlugins/group/kindnet/HairPin 0.14
310 TestNetworkPlugins/group/auto/KubeletFlags 0.25
311 TestNetworkPlugins/group/auto/NetCatPod 9.22
312 TestNetworkPlugins/group/auto/DNS 0.14
313 TestNetworkPlugins/group/auto/Localhost 0.11
314 TestNetworkPlugins/group/custom-flannel/Start 58.57
315 TestNetworkPlugins/group/auto/HairPin 0.13
316 TestNetworkPlugins/group/false/Start 43.54
317 TestNetworkPlugins/group/enable-default-cni/Start 76.67
318 TestNetworkPlugins/group/calico/ControllerPod 6.01
319 TestNetworkPlugins/group/calico/KubeletFlags 0.31
320 TestNetworkPlugins/group/calico/NetCatPod 10.28
321 TestNetworkPlugins/group/calico/DNS 0.15
322 TestNetworkPlugins/group/calico/Localhost 0.12
323 TestNetworkPlugins/group/calico/HairPin 0.13
324 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.27
325 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.2
326 TestNetworkPlugins/group/false/KubeletFlags 0.25
327 TestNetworkPlugins/group/false/NetCatPod 10.18
328 TestNetworkPlugins/group/custom-flannel/DNS 0.14
329 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
330 TestNetworkPlugins/group/custom-flannel/HairPin 0.12
331 TestNetworkPlugins/group/flannel/Start 52.91
332 TestNetworkPlugins/group/false/DNS 0.16
333 TestNetworkPlugins/group/false/Localhost 0.13
334 TestNetworkPlugins/group/false/HairPin 0.14
335 TestNetworkPlugins/group/bridge/Start 41.78
336 TestNetworkPlugins/group/kubenet/Start 79.45
337 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
338 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
339 TestNetworkPlugins/group/enable-default-cni/DNS 0.15
340 TestNetworkPlugins/group/enable-default-cni/Localhost 0.13
341 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
342 TestNetworkPlugins/group/flannel/ControllerPod 6.01
343 TestNetworkPlugins/group/flannel/KubeletFlags 0.28
344 TestNetworkPlugins/group/flannel/NetCatPod 10.2
346 TestStartStop/group/old-k8s-version/serial/FirstStart 146.43
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
348 TestNetworkPlugins/group/bridge/NetCatPod 12.25
349 TestNetworkPlugins/group/flannel/DNS 0.13
350 TestNetworkPlugins/group/flannel/Localhost 0.12
351 TestNetworkPlugins/group/flannel/HairPin 0.12
352 TestNetworkPlugins/group/bridge/DNS 0.13
353 TestNetworkPlugins/group/bridge/Localhost 0.12
354 TestNetworkPlugins/group/bridge/HairPin 0.1
356 TestStartStop/group/no-preload/serial/FirstStart 54.12
358 TestStartStop/group/embed-certs/serial/FirstStart 44.08
359 TestNetworkPlugins/group/kubenet/KubeletFlags 0.31
360 TestNetworkPlugins/group/kubenet/NetCatPod 10.23
361 TestNetworkPlugins/group/kubenet/DNS 0.17
362 TestNetworkPlugins/group/kubenet/Localhost 0.15
363 TestNetworkPlugins/group/kubenet/HairPin 0.15
365 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 38.48
366 TestStartStop/group/embed-certs/serial/DeployApp 11.23
367 TestStartStop/group/no-preload/serial/DeployApp 10.25
368 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.89
369 TestStartStop/group/embed-certs/serial/Stop 10.7
370 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.79
371 TestStartStop/group/no-preload/serial/Stop 10.74
372 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.16
373 TestStartStop/group/embed-certs/serial/SecondStart 262.79
374 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.16
375 TestStartStop/group/no-preload/serial/SecondStart 263.1
376 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.3
377 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.76
378 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.65
379 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
380 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 262.74
381 TestStartStop/group/old-k8s-version/serial/DeployApp 10.46
382 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.87
383 TestStartStop/group/old-k8s-version/serial/Stop 10.74
384 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.18
385 TestStartStop/group/old-k8s-version/serial/SecondStart 123.33
386 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
388 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.2
389 TestStartStop/group/old-k8s-version/serial/Pause 2.27
391 TestStartStop/group/newest-cni/serial/FirstStart 38.17
392 TestStartStop/group/newest-cni/serial/DeployApp 0
393 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.02
394 TestStartStop/group/newest-cni/serial/Stop 5.65
395 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.16
396 TestStartStop/group/newest-cni/serial/SecondStart 14.65
397 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
398 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
399 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
400 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
401 TestStartStop/group/newest-cni/serial/Pause 2.18
402 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
403 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
404 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.19
405 TestStartStop/group/embed-certs/serial/Pause 2.23
406 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
407 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.19
408 TestStartStop/group/no-preload/serial/Pause 2.26
409 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
410 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
411 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
412 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.21
TestDownloadOnly/v1.20.0/json-events (40.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-590167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-590167 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (40.032699487s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (40.03s)
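With -o=json, minikube replaces its human-readable output with one machine-readable JSON event per line (CloudEvents-style), which is what this json-events test consumes. A minimal sketch of filtering those events, assuming jq is installed and using a hypothetical profile name (the event type string is recalled from minikube's JSON output, not taken from this run):

	out/minikube-linux-amd64 start -o=json -p demo --driver=docker \
	  | jq -c 'select(.type == "io.k8s.sigs.minikube.step") | .data'
	# each matching event carries step fields such as currentstep, totalsteps, name, message
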

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-590167
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-590167: exit status 85 (51.296745ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-590167 | jenkins | v1.33.1 | 22 May 24 17:44 UTC |          |
	|         | -p download-only-590167        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:44:14
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:44:14.109547   16680 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:44:14.109788   16680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:44:14.109797   16680 out.go:304] Setting ErrFile to fd 2...
	I0522 17:44:14.109804   16680 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:44:14.109964   16680 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	W0522 17:44:14.110092   16680 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18943-9771/.minikube/config/config.json: open /home/jenkins/minikube-integration/18943-9771/.minikube/config/config.json: no such file or directory
	I0522 17:44:14.110645   16680 out.go:298] Setting JSON to true
	I0522 17:44:14.111538   16680 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1598,"bootTime":1716398256,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:44:14.111593   16680 start.go:139] virtualization: kvm guest
	I0522 17:44:14.113943   16680 out.go:97] [download-only-590167] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:44:14.115256   16680 out.go:169] MINIKUBE_LOCATION=18943
	W0522 17:44:14.114034   16680 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball: no such file or directory
	I0522 17:44:14.114085   16680 notify.go:220] Checking for updates...
	I0522 17:44:14.117778   16680 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:44:14.119088   16680 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:44:14.120282   16680 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:44:14.121426   16680 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0522 17:44:14.123734   16680 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0522 17:44:14.124059   16680 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:44:14.144845   16680 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:44:14.144966   16680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:44:14.480081   16680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-22 17:44:14.471354161 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:44:14.480210   16680 docker.go:295] overlay module found
	I0522 17:44:14.481992   16680 out.go:97] Using the docker driver based on user configuration
	I0522 17:44:14.482011   16680 start.go:297] selected driver: docker
	I0522 17:44:14.482023   16680 start.go:901] validating driver "docker" against <nil>
	I0522 17:44:14.482110   16680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:44:14.526456   16680 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-05-22 17:44:14.518474665 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:44:14.526628   16680 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:44:14.527106   16680 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0522 17:44:14.527253   16680 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0522 17:44:14.529042   16680 out.go:169] Using Docker driver with root privileges
	I0522 17:44:14.530082   16680 cni.go:84] Creating CNI manager for ""
	I0522 17:44:14.530099   16680 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0522 17:44:14.530153   16680 start.go:340] cluster config:
	{Name:download-only-590167 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-590167 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:44:14.531303   16680 out.go:97] Starting "download-only-590167" primary control-plane node in "download-only-590167" cluster
	I0522 17:44:14.531319   16680 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:44:14.532400   16680 out.go:97] Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:44:14.532418   16680 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0522 17:44:14.532514   16680 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:44:14.546499   16680 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0522 17:44:14.546630   16680 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0522 17:44:14.546708   16680 image.go:118] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0522 17:44:14.639194   16680 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0522 17:44:14.639215   16680 cache.go:56] Caching tarball of preloaded images
	I0522 17:44:14.639367   16680 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0522 17:44:14.641132   16680 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0522 17:44:14.641147   16680 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0522 17:44:14.750317   16680 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0522 17:44:27.960296   16680 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0522 17:44:27.960385   16680 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0522 17:44:28.235307   16680 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a as a tarball
	I0522 17:44:28.845154   16680 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0522 17:44:28.845490   16680 profile.go:143] Saving config to /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/download-only-590167/config.json ...
	I0522 17:44:28.845528   16680 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/download-only-590167/config.json: {Name:mk5621794d824816b15bf0a96a312a416dffbce6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0522 17:44:28.845736   16680 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0522 17:44:28.845950   16680 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18943-9771/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-590167 host does not exist
	  To start a cluster, run: "minikube start -p download-only-590167"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.05s)
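The ==> Last Start <== replay above also shows how the preload is fetched: the download URL pins an md5 (checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3), which minikube verifies before treating the tarball as cached. The cached file can be re-checked by hand; the path and checksum below are taken verbatim from the log:

	md5sum /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	# expected: 9a82241e9b8b4ad2b5cca73108f2c7a3
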

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.18s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.18s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-590167
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.11s)

                                                
                                    
TestDownloadOnly/v1.30.1/json-events (13.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-527247 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-527247 --force --alsologtostderr --kubernetes-version=v1.30.1 --container-runtime=docker --driver=docker  --container-runtime=docker: (13.990168753s)
--- PASS: TestDownloadOnly/v1.30.1/json-events (13.99s)

                                                
                                    
TestDownloadOnly/v1.30.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/preload-exists
--- PASS: TestDownloadOnly/v1.30.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/LogsDuration (0.05s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-527247
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-527247: exit status 85 (53.253558ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-590167 | jenkins | v1.33.1 | 22 May 24 17:44 UTC |                     |
	|         | -p download-only-590167        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 22 May 24 17:44 UTC | 22 May 24 17:44 UTC |
	| delete  | -p download-only-590167        | download-only-590167 | jenkins | v1.33.1 | 22 May 24 17:44 UTC | 22 May 24 17:44 UTC |
	| start   | -o=json --download-only        | download-only-527247 | jenkins | v1.33.1 | 22 May 24 17:44 UTC |                     |
	|         | -p download-only-527247        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.1   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/22 17:44:54
	Running on machine: ubuntu-20-agent-10
	Binary: Built with gc go1.22.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0522 17:44:54.490505   17113 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:44:54.490615   17113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:44:54.490623   17113 out.go:304] Setting ErrFile to fd 2...
	I0522 17:44:54.490628   17113 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:44:54.490814   17113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:44:54.491367   17113 out.go:298] Setting JSON to true
	I0522 17:44:54.492207   17113 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":1638,"bootTime":1716398256,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:44:54.492260   17113 start.go:139] virtualization: kvm guest
	I0522 17:44:54.494288   17113 out.go:97] [download-only-527247] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:44:54.495611   17113 out.go:169] MINIKUBE_LOCATION=18943
	I0522 17:44:54.494415   17113 notify.go:220] Checking for updates...
	I0522 17:44:54.498054   17113 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:44:54.499289   17113 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:44:54.500374   17113 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:44:54.501467   17113 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0522 17:44:54.503446   17113 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0522 17:44:54.503628   17113 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:44:54.523355   17113 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:44:54.523437   17113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:44:54.570539   17113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-22 17:44:54.560834543 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:44:54.570662   17113 docker.go:295] overlay module found
	I0522 17:44:54.572333   17113 out.go:97] Using the docker driver based on user configuration
	I0522 17:44:54.572352   17113 start.go:297] selected driver: docker
	I0522 17:44:54.572363   17113 start.go:901] validating driver "docker" against <nil>
	I0522 17:44:54.572441   17113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:44:54.617898   17113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:30 OomKillDisable:true NGoroutines:46 SystemTime:2024-05-22 17:44:54.610011251 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:44:54.618066   17113 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0522 17:44:54.618562   17113 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0522 17:44:54.618733   17113 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0522 17:44:54.620346   17113 out.go:169] Using Docker driver with root privileges
	I0522 17:44:54.621407   17113 cni.go:84] Creating CNI manager for ""
	I0522 17:44:54.621427   17113 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0522 17:44:54.621444   17113 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0522 17:44:54.621513   17113 start.go:340] cluster config:
	{Name:download-only-527247 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:download-only-527247 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:44:54.622750   17113 out.go:97] Starting "download-only-527247" primary control-plane node in "download-only-527247" cluster
	I0522 17:44:54.622776   17113 cache.go:121] Beginning downloading kic base image for docker with docker
	I0522 17:44:54.624071   17113 out.go:97] Pulling base image v0.0.44-1715707529-18887 ...
	I0522 17:44:54.624097   17113 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:44:54.624209   17113 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local docker daemon
	I0522 17:44:54.638568   17113 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a to local cache
	I0522 17:44:54.638666   17113 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory
	I0522 17:44:54.638692   17113 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a in local cache directory, skipping pull
	I0522 17:44:54.638699   17113 image.go:105] gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a exists in cache, skipping pull
	I0522 17:44:54.638706   17113 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a as a tarball
	I0522 17:44:55.063539   17113 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	I0522 17:44:55.063571   17113 cache.go:56] Caching tarball of preloaded images
	I0522 17:44:55.063744   17113 preload.go:132] Checking if preload exists for k8s version v1.30.1 and runtime docker
	I0522 17:44:55.065542   17113 out.go:97] Downloading Kubernetes v1.30.1 preload ...
	I0522 17:44:55.065564   17113 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4 ...
	I0522 17:44:55.605927   17113 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.1/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4?checksum=md5:f110de85c4cd01fa5de0726fbc529387 -> /home/jenkins/minikube-integration/18943-9771/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.1-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-527247 host does not exist
	  To start a cluster, run: "minikube start -p download-only-527247"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.1/LogsDuration (0.05s)

TestDownloadOnly/v1.30.1/DeleteAll (0.18s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.1/DeleteAll (0.18s)

TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

=== RUN   TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-527247
--- PASS: TestDownloadOnly/v1.30.1/DeleteAlwaysSucceeds (0.11s)

TestDownloadOnlyKic (0.98s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-802596 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-802596" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-802596
--- PASS: TestDownloadOnlyKic (0.98s)

TestBinaryMirror (0.68s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-684903 --alsologtostderr --binary-mirror http://127.0.0.1:35715 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-684903" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-684903
--- PASS: TestBinaryMirror (0.68s)

TestOffline (62.05s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-000539 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-000539 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (59.917172544s)
helpers_test.go:175: Cleaning up "offline-docker-000539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-000539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-000539: (2.134724205s)
--- PASS: TestOffline (62.05s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-340431
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-340431: exit status 85 (44.795475ms)

-- stdout --
	* Profile "addons-340431" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-340431"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.04s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-340431
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-340431: exit status 85 (42.756745ms)

-- stdout --
	* Profile "addons-340431" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-340431"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.04s)
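
Both PreSetup checks exercise the same guard: addon operations against a profile that has never been created fail fast with exit code 85 instead of touching any cluster state. A minimal sketch of the same check by hand (any profile name with no state under MINIKUBE_HOME behaves this way):

	# enable and disable both refuse to run without an existing profile
	out/minikube-linux-amd64 addons enable dashboard -p addons-340431
	echo $?   # 85, matching the Non-zero exit lines above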

TestAddons/Setup (134.12s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-340431 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-340431 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m14.11555494s)
--- PASS: TestAddons/Setup (134.12s)
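
For reference, the bring-up this test performs is a plain `minikube start` with one --addons flag per addon; a trimmed sketch (same profile and memory as the run above, addon list shortened):

	out/minikube-linux-amd64 start -p addons-340431 --wait=true --memory=4000 \
	  --driver=docker --container-runtime=docker \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns
	out/minikube-linux-amd64 addons list -p addons-340431   # confirm which addons came up enabled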

TestAddons/parallel/Registry (15.09s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 15.00103ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9zng4" [086e5735-1510-404f-a9ba-b4a3da172adb] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.004328901s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-4mdmx" [2c3f2fb3-12c1-4534-8cc1-ee94d1890f7b] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.037532211s
addons_test.go:340: (dbg) Run:  kubectl --context addons-340431 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-340431 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-340431 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.129314679s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 ip
2024/05/22 17:47:39 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.09s)
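
The registry assertions reduce to two probes, one from inside the cluster and one from the host; condensed from the commands logged above:

	# in-cluster: the registry Service DNS name must answer (wget --spider checks reachability only)
	kubectl --context addons-340431 run --rm registry-test --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# from the host: registry-proxy publishes port 5000 on the node IP
	curl -s "http://$(out/minikube-linux-amd64 -p addons-340431 ip):5000"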

TestAddons/parallel/Ingress (20.68s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-340431 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-340431 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-340431 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [b2b906a4-f9bc-4155-a522-b5471baed230] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [b2b906a4-f9bc-4155-a522-b5471baed230] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003483651s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-340431 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-340431 addons disable ingress --alsologtostderr -v=1: (7.53279508s)
--- PASS: TestAddons/parallel/Ingress (20.68s)
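
Two details of this test are easy to miss: HTTP routing is verified with a Host header rather than real DNS, and ingress-dns is verified by querying the node IP directly as a DNS server. The probes, as run above:

	# nginx ingress: route by Host header, no DNS entry needed
	out/minikube-linux-amd64 -p addons-340431 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# ingress-dns: the node itself answers DNS queries for ingress hostnames
	nslookup hello-john.test 192.168.49.2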

TestAddons/parallel/InspektorGadget (11.6s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-dr4hp" [91eb9ba2-0f24-4efd-8b6c-619fdbc35bda] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004369752s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-340431
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-340431: (5.59072959s)
--- PASS: TestAddons/parallel/InspektorGadget (11.60s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 1.774715ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-82lfr" [1ed00451-f84b-4e62-b769-9d753f3238b1] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003520936s
addons_test.go:415: (dbg) Run:  kubectl --context addons-340431 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)
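
The pass condition is only that the metrics API answers; once the metrics-server pod is Running, the probe is the one-liner from the log:

	kubectl --context addons-340431 top pods -n kube-system   # errors until metrics-server has scraped at least once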

TestAddons/parallel/HelmTiller (11.71s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 1.802576ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-b2ddq" [9d9157af-4627-4bf5-8b2c-4dd2e1ddd0a0] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.003572752s
addons_test.go:473: (dbg) Run:  kubectl --context addons-340431 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-340431 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.268351528s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.71s)

TestAddons/parallel/CSI (41.25s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 13.901726ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-340431 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-340431 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [0bbb2397-527a-43f3-ab9a-b844e82f4e10] Pending
helpers_test.go:344: "task-pv-pod" [0bbb2397-527a-43f3-ab9a-b844e82f4e10] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [0bbb2397-527a-43f3-ab9a-b844e82f4e10] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 15.00256043s
addons_test.go:584: (dbg) Run:  kubectl --context addons-340431 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-340431 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-340431 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-340431 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-340431 delete pod task-pv-pod: (1.076668771s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-340431 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-340431 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-340431 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [13543aa0-2c0b-413a-9378-455b51c36deb] Pending
helpers_test.go:344: "task-pv-pod-restore" [13543aa0-2c0b-413a-9378-455b51c36deb] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [13543aa0-2c0b-413a-9378-455b51c36deb] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003938356s
addons_test.go:626: (dbg) Run:  kubectl --context addons-340431 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-340431 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-340431 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-340431 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.363161615s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (41.25s)
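
The sequence above is the standard CSI snapshot-and-restore round trip. A compressed view of the ordering (the testdata YAML is not reproduced here; the restore PVC is assumed to reference the snapshot via a dataSource stanza, which is the standard CSI restore mechanism):

	kubectl create -f pvc.yaml              # PVC "hpvc" provisioned by csi-hostpath-driver
	kubectl create -f pv-pod.yaml           # "task-pv-pod" mounts the claim and writes data
	kubectl create -f snapshot.yaml         # VolumeSnapshot "new-snapshot-demo"; poll .status.readyToUse
	kubectl delete pod task-pv-pod && kubectl delete pvc hpvc
	kubectl create -f pvc-restore.yaml      # "hpvc-restore" hydrated from the snapshot
	kubectl create -f pv-pod-restore.yaml   # "task-pv-pod-restore" sees the restored data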

TestAddons/parallel/Headlamp (12.64s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-340431 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-488pz" [2e5f5c15-3b96-428f-81b1-bb8e6e9f491f] Pending
helpers_test.go:344: "headlamp-68456f997b-488pz" [2e5f5c15-3b96-428f-81b1-bb8e6e9f491f] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-488pz" [2e5f5c15-3b96-428f-81b1-bb8e6e9f491f] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.00366764s
--- PASS: TestAddons/parallel/Headlamp (12.64s)

TestAddons/parallel/CloudSpanner (5.38s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-vbblq" [015da1bb-7e3d-4f2c-a051-9524ea38ab7a] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003206154s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-340431
--- PASS: TestAddons/parallel/CloudSpanner (5.38s)

TestAddons/parallel/LocalPath (54.53s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-340431 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-340431 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [e72851fe-9267-4457-b5b3-b646cd4bbdbd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [e72851fe-9267-4457-b5b3-b646cd4bbdbd] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [e72851fe-9267-4457-b5b3-b646cd4bbdbd] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.00245767s
addons_test.go:891: (dbg) Run:  kubectl --context addons-340431 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 ssh "cat /opt/local-path-provisioner/pvc-dae51d3b-50cc-480b-ae39-9045241fb98f_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-340431 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-340431 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-340431 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-340431 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.749341436s)
--- PASS: TestAddons/parallel/LocalPath (54.53s)
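
The repeated phase polls are expected: local-path claims stay Pending until a consuming pod schedules (the provisioner's storage class binds WaitForFirstConsumer), and the final ssh/cat proves the volume really is a host directory. A sketch, with the PV-specific path segment left as a placeholder:

	kubectl apply -f pvc.yaml && kubectl apply -f pod.yaml
	kubectl get pvc test-pvc -o jsonpath={.status.phase}   # Pending until the pod lands
	out/minikube-linux-amd64 -p addons-340431 ssh "cat /opt/local-path-provisioner/<pv-id>_default_test-pvc/file1"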

TestAddons/parallel/NvidiaDevicePlugin (6.43s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-szv58" [a7af1307-7d93-4cb1-88d4-0d1b49414651] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004604876s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-340431
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.43s)

TestAddons/parallel/Yakd (6s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-f979w" [3b90a157-23ed-4544-932c-8e54b3a89347] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003716898s
--- PASS: TestAddons/parallel/Yakd (6.00s)

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-340431 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-340431 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)
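
What this asserts: once gcp-auth is enabled, namespaces created afterwards receive the gcp-auth secret automatically, so workloads in new namespaces pick up credentials without manual copying. The whole probe:

	kubectl --context addons-340431 create ns new-namespace
	kubectl --context addons-340431 get secret gcp-auth -n new-namespace   # present without any manual step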

TestAddons/StoppedEnableDisable (11.01s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-340431
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-340431: (10.792181907s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-340431
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-340431
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-340431
--- PASS: TestAddons/StoppedEnableDisable (11.01s)

TestCertOptions (30.03s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-481269 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-481269 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (27.457520722s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-481269 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-481269 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-481269 -- "sudo cat /etc/kubernetes/admin.conf"
E0522 19:12:24.838553   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
helpers_test.go:175: Cleaning up "cert-options-481269" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-481269
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-481269: (1.990227149s)
--- PASS: TestCertOptions (30.03s)
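
The SAN and port assertions come down to reading the generated apiserver certificate inside the node; the same check by hand (the grep is added here for readability):

	out/minikube-linux-amd64 -p cert-options-481269 ssh \
	  "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
	  | grep -A1 "Subject Alternative Name"   # expect 192.168.15.15 and www.google.com among the SANs
	kubectl --context cert-options-481269 config view   # the server URL should end in :8555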

TestCertExpiration (229.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-068163 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-068163 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=docker: (23.860244643s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-068163 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-068163 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (21.755401773s)
helpers_test.go:175: Cleaning up "cert-expiration-068163" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-068163
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-068163: (3.695601973s)
--- PASS: TestCertExpiration (229.32s)
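
The point of the second start is that restarting an existing profile regenerates certificates with the new --cert-expiration rather than failing on the already-expired 3m certs. A sketch (the sleep stands in for the roughly three-minute wait the harness performs between the two starts):

	out/minikube-linux-amd64 start -p cert-expiration-068163 --memory=2048 --cert-expiration=3m \
	  --driver=docker --container-runtime=docker
	sleep 180   # let the short-lived certs expire
	out/minikube-linux-amd64 start -p cert-expiration-068163 --memory=2048 --cert-expiration=8760h \
	  --driver=docker --container-runtime=docker   # succeeds: certs are reissued with the new TTL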

TestDockerFlags (25.37s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-201021 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-201021 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (22.880581194s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-201021 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-201021 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-201021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-201021
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-201021: (1.992377001s)
--- PASS: TestDockerFlags (25.37s)
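
--docker-env and --docker-opt are materialized in the dockerd systemd unit inside the node, which is why the assertions go through systemctl show rather than docker info:

	out/minikube-linux-amd64 -p docker-flags-201021 ssh \
	  "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
	out/minikube-linux-amd64 -p docker-flags-201021 ssh \
	  "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true opts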

TestForceSystemdFlag (31.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-663021 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-663021 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (28.572355177s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-663021 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-663021" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-663021
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-663021: (2.513528047s)
--- PASS: TestForceSystemdFlag (31.47s)
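
This test and TestForceSystemdEnv below share one probe: the cgroup driver dockerd reports inside the node, which --force-systemd (or MINIKUBE_FORCE_SYSTEMD=true in the Env variant) should flip from the default cgroupfs:

	out/minikube-linux-amd64 -p force-systemd-flag-663021 ssh \
	  "docker info --format {{.CgroupDriver}}"   # expect "systemd"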

TestForceSystemdEnv (35.07s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-974134 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-974134 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (32.635626097s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-974134 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-974134" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-974134
E0522 19:11:55.310749   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-974134: (2.106422236s)
--- PASS: TestForceSystemdEnv (35.07s)

TestKVMDriverInstallOrUpdate (4.9s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (4.90s)

TestErrorSpam/setup (20.54s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-933774 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-933774 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-933774 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-933774 --driver=docker  --container-runtime=docker: (20.535683392s)
--- PASS: TestErrorSpam/setup (20.54s)

TestErrorSpam/start (0.53s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 start --dry-run
--- PASS: TestErrorSpam/start (0.53s)

TestErrorSpam/status (0.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 status
--- PASS: TestErrorSpam/status (0.79s)

TestErrorSpam/pause (1.05s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 pause
--- PASS: TestErrorSpam/pause (1.05s)

TestErrorSpam/unpause (1.08s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 unpause
--- PASS: TestErrorSpam/unpause (1.08s)

TestErrorSpam/stop (10.81s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 stop: (10.646274611s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-933774 --log_dir /tmp/nospam-933774 stop
--- PASS: TestErrorSpam/stop (10.81s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18943-9771/.minikube/files/etc/test/nested/copy/16668/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164981 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-164981 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (37.245068852s)
--- PASS: TestFunctional/serial/StartWithProxy (37.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (32.7s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164981 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-164981 --alsologtostderr -v=8: (32.694340269s)
functional_test.go:659: soft start took 32.694991846s for "functional-164981" cluster.
--- PASS: TestFunctional/serial/SoftStart (32.70s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-164981 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.27s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.27s)

TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-164981 /tmp/TestFunctionalserialCacheCmdcacheadd_local2711678358/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cache add minikube-local-cache-test:functional-164981
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-164981 cache add minikube-local-cache-test:functional-164981: (1.340388494s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cache delete minikube-local-cache-test:functional-164981
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-164981
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.68s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.04s)

TestFunctional/serial/CacheCmd/cache/list (0.04s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.04s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (244.79537ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.14s)
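
The sequence above is effectively a smoke test for `cache reload`: remove the image inside the node, confirm crictl no longer finds it, reload from the host-side cache, and confirm it is back. A minimal Go sketch that replays the same commands (assuming the binary path and profile name from this report; this is illustrative, not the test's actual implementation):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary from this report and returns the
// command's error, if any (a non-zero exit shows up as an *exec.ExitError).
func run(args ...string) error {
	return exec.Command("out/minikube-linux-amd64", args...).Run()
}

func main() {
	profile := []string{"-p", "functional-164981"}
	// 1. Remove the image inside the node.
	run(append(profile, "ssh", "sudo docker rmi registry.k8s.io/pause:latest")...)
	// 2. crictl should now fail to find it (exit status 1 in the log above).
	if err := run(append(profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")...); err == nil {
		fmt.Println("image unexpectedly still present")
	}
	// 3. Reload the node's images from the host-side cache ...
	run(append(profile, "cache", "reload")...)
	// 4. ... after which the inspect succeeds again.
	if err := run(append(profile, "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest")...); err != nil {
		fmt.Println("image still missing after reload:", err)
	}
}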

TestFunctional/serial/CacheCmd/cache/delete (0.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 kubectl -- --context functional-164981 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-164981 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.09s)

TestFunctional/serial/ExtraConfig (38.99s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164981 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-164981 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.98667049s)
functional_test.go:757: restart took 38.986772774s for "functional-164981" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.99s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-164981 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (0.88s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 logs
--- PASS: TestFunctional/serial/LogsCmd (0.88s)

TestFunctional/serial/LogsFileCmd (0.91s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 logs --file /tmp/TestFunctionalserialLogsFileCmd141621596/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.91s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-164981 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-164981
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-164981: exit status 115 (294.472756ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:32658 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-164981 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/ConfigCmd (0.36s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 config get cpus: exit status 14 (77.686226ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 config get cpus: exit status 14 (48.980526ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
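
As the transcript shows, `config get` on an unset key fails with exit status 14 rather than printing an empty value. A small sketch of checking for that exit code from Go (same binary path and profile as above; names are taken from this log, not guaranteed elsewhere):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// After `config unset cpus`, `config get cpus` is expected to exit
	// with status 14 ("specified key could not be found in config").
	err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-164981", "config", "get", "cpus").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 14 {
		fmt.Println("cpus is unset, as the test expects")
	} else if err != nil {
		fmt.Println("unexpected failure:", err)
	}
}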

TestFunctional/parallel/DashboardCmd (15.96s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-164981 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-164981 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 61888: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.96s)

TestFunctional/parallel/DryRun (0.31s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-164981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (131.361025ms)
-- stdout --
	* [functional-164981] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I0522 17:52:20.504993   61445 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:20.505243   61445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:20.505253   61445 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:20.505257   61445 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:20.505477   61445 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:20.506035   61445 out.go:298] Setting JSON to false
	I0522 17:52:20.507209   61445 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2084,"bootTime":1716398256,"procs":464,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:20.507292   61445 start.go:139] virtualization: kvm guest
	I0522 17:52:20.508818   61445 out.go:177] * [functional-164981] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:20.509999   61445 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:20.509996   61445 notify.go:220] Checking for updates...
	I0522 17:52:20.511121   61445 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:20.512387   61445 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:20.513509   61445 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:20.514584   61445 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:20.515621   61445 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:20.517241   61445 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:20.517936   61445 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:20.542234   61445 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:20.542321   61445 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:20.585377   61445 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-05-22 17:52:20.576800715 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:20.585555   61445 docker.go:295] overlay module found
	I0522 17:52:20.587381   61445 out.go:177] * Using the docker driver based on existing profile
	I0522 17:52:20.588510   61445 start.go:297] selected driver: docker
	I0522 17:52:20.588522   61445 start.go:901] validating driver "docker" against &{Name:functional-164981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-164981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:20.588623   61445 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:20.590522   61445 out.go:177] 
	W0522 17:52:20.591611   61445 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0522 17:52:20.592872   61445 out.go:177] 
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164981 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.31s)
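
`--dry-run` validates the requested configuration without touching any containers; here the 250MB request is rejected with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because it falls below the 1800MB usable minimum. A sketch of asserting that behavior, under the same binary/profile assumptions as the earlier examples:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Requesting 250MB is below minikube's 1800MB minimum, so the dry run
	// should fail fast with exit status 23 and create nothing.
	err := exec.Command("out/minikube-linux-amd64", "start",
		"-p", "functional-164981", "--dry-run", "--memory", "250MB",
		"--driver=docker", "--container-runtime=docker").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 23 {
		fmt.Println("dry run rejected the undersized memory request")
	}
}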

TestFunctional/parallel/InternationalLanguage (0.14s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-164981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-164981 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (144.535025ms)
-- stdout --
	* [functional-164981] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I0522 17:52:20.361519   61366 out.go:291] Setting OutFile to fd 1 ...
	I0522 17:52:20.361598   61366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:20.361606   61366 out.go:304] Setting ErrFile to fd 2...
	I0522 17:52:20.361610   61366 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 17:52:20.361838   61366 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 17:52:20.362774   61366 out.go:298] Setting JSON to false
	I0522 17:52:20.364171   61366 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-10","uptime":2084,"bootTime":1716398256,"procs":465,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1060-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0522 17:52:20.364230   61366 start.go:139] virtualization: kvm guest
	I0522 17:52:20.366029   61366 out.go:177] * [functional-164981] minikube v1.33.1 sur Ubuntu 20.04 (kvm/amd64)
	I0522 17:52:20.367773   61366 out.go:177]   - MINIKUBE_LOCATION=18943
	I0522 17:52:20.367832   61366 notify.go:220] Checking for updates...
	I0522 17:52:20.370135   61366 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0522 17:52:20.371426   61366 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	I0522 17:52:20.372603   61366 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	I0522 17:52:20.373858   61366 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0522 17:52:20.375130   61366 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0522 17:52:20.376828   61366 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 17:52:20.377502   61366 driver.go:392] Setting default libvirt URI to qemu:///system
	I0522 17:52:20.401438   61366 docker.go:122] docker version: linux-26.1.3:Docker Engine - Community
	I0522 17:52:20.401519   61366 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 17:52:20.453318   61366 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:33 OomKillDisable:true NGoroutines:54 SystemTime:2024-05-22 17:52:20.442369807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 17:52:20.453465   61366 docker.go:295] overlay module found
	I0522 17:52:20.455409   61366 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0522 17:52:20.456526   61366 start.go:297] selected driver: docker
	I0522 17:52:20.456550   61366 start.go:901] validating driver "docker" against &{Name:functional-164981 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1715707529-18887@sha256:734a2adad159415bd51b0392bc8976bd7e3e543e9165c2374d6e59bac37aed3a Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.1 ClusterName:functional-164981 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.30.1 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0522 17:52:20.456667   61366 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0522 17:52:20.458907   61366 out.go:177] 
	W0522 17:52:20.460091   61366 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0522 17:52:20.461177   61366 out.go:177] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.14s)

TestFunctional/parallel/StatusCmd (0.81s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.81s)

TestFunctional/parallel/ServiceCmdConnect (21.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-164981 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-164981 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-2s767" [c04657e1-47ec-491a-85db-b436cdc5f484] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-2s767" [c04657e1-47ec-491a-85db-b436cdc5f484] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 21.003115574s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.49.2:31816
functional_test.go:1671: http://192.168.49.2:31816: success! body:

Hostname: hello-node-connect-57b4589c47-2s767

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31816
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (21.63s)
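
The connect test boils down to: create a deployment, expose it as a NodePort service, resolve its URL with `minikube service --url`, and issue an HTTP GET. A condensed Go sketch of the same round trip (names come from this log; it assumes `--url` prints a single URL on stdout, and pod suffixes are generated per deployment):

package main

import (
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Ask minikube for the NodePort URL of the exposed service.
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "functional-164981", "service", "hello-node-connect", "--url").Output()
	if err != nil {
		panic(err)
	}
	url := strings.TrimSpace(string(out))
	// Hit the endpoint; the echoserver replies with the request details
	// reproduced in the body above.
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("GET %s -> %s\n%s", url, resp.Status, body)
}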

TestFunctional/parallel/AddonsCmd (0.13s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

TestFunctional/parallel/PersistentVolumeClaim (41.98s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [fe341f2d-0c1a-451c-b23d-41f3a75a66a2] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.016464293s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-164981 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-164981 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-164981 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-164981 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [545af089-0499-4456-91f7-2cf47f488856] Pending
helpers_test.go:344: "sp-pod" [545af089-0499-4456-91f7-2cf47f488856] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [545af089-0499-4456-91f7-2cf47f488856] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.004184754s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-164981 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-164981 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-164981 delete -f testdata/storage-provisioner/pod.yaml: (1.194567426s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-164981 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [c1b5c056-3e9f-42ab-86f7-1db432008c51] Pending
helpers_test.go:344: "sp-pod" [c1b5c056-3e9f-42ab-86f7-1db432008c51] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [c1b5c056-3e9f-42ab-86f7-1db432008c51] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003903115s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-164981 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (41.98s)

TestFunctional/parallel/SSHCmd (0.54s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.54s)

TestFunctional/parallel/CpCmd (1.68s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh -n functional-164981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cp functional-164981:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2114976394/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh -n functional-164981 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh -n functional-164981 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.68s)

TestFunctional/parallel/MySQL (23.45s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-164981 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-8qmnd" [310bbd35-d5df-47d8-913e-1ff832bae502] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-8qmnd" [310bbd35-d5df-47d8-913e-1ff832bae502] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 17.003633769s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-164981 exec mysql-64454c8b5c-8qmnd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-164981 exec mysql-64454c8b5c-8qmnd -- mysql -ppassword -e "show databases;": exit status 1 (145.434046ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-164981 exec mysql-64454c8b5c-8qmnd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-164981 exec mysql-64454c8b5c-8qmnd -- mysql -ppassword -e "show databases;": exit status 1 (198.853163ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-164981 exec mysql-64454c8b5c-8qmnd -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-164981 exec mysql-64454c8b5c-8qmnd -- mysql -ppassword -e "show databases;": exit status 1 (186.957465ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-164981 exec mysql-64454c8b5c-8qmnd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (23.45s)
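
The repeated non-zero exits above are expected: the pod reports Running before mysqld finishes initializing, so the test simply retries the query until it succeeds. A sketch of that polling loop (the pod name is the generated one from this run and would differ in any other run):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Early attempts fail with ERROR 1045/2002 while mysqld is still
	// starting up; keep retrying until `show databases;` succeeds.
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "functional-164981",
			"exec", "mysql-64454c8b5c-8qmnd", "--",
			"mysql", "-ppassword", "-e", "show databases;").CombinedOutput()
		if err == nil {
			fmt.Print(string(out))
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("mysql did not become ready in time")
}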

TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16668/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo cat /etc/test/nested/copy/16668/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (1.73s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16668.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo cat /etc/ssl/certs/16668.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16668.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo cat /usr/share/ca-certificates/16668.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/166682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo cat /etc/ssl/certs/166682.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/166682.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo cat /usr/share/ca-certificates/166682.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.73s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-164981 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 ssh "sudo systemctl is-active crio": exit status 1 (235.719285ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.24s)

TestFunctional/parallel/License (0.6s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.60s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-164981 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-164981 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-164981 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 57937: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-164981 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.44s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-164981 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-164981 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c8ae70fb-0422-45d2-b631-604bbb737d12] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c8ae70fb-0422-45d2-b631-604bbb737d12] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 20.003512054s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (20.27s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-164981 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.110.120.22 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-164981 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-164981 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-164981 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-n9mm5" [8a03df37-9a90-4420-a85e-b364604242c6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-n9mm5" [8a03df37-9a90-4420-a85e-b364604242c6] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003913781s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.32s)

TestFunctional/parallel/MountCmd/any-port (7.67s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdany-port4061324569/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1716400338822062145" to /tmp/TestFunctionalparallelMountCmdany-port4061324569/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1716400338822062145" to /tmp/TestFunctionalparallelMountCmdany-port4061324569/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1716400338822062145" to /tmp/TestFunctionalparallelMountCmdany-port4061324569/001/test-1716400338822062145
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (249.588852ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May 22 17:52 created-by-test
-rw-r--r-- 1 docker docker 24 May 22 17:52 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May 22 17:52 test-1716400338822062145
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh cat /mount-9p/test-1716400338822062145
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-164981 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [8ac821e2-3012-4a15-b7c8-a39d0bc41f3e] Pending
helpers_test.go:344: "busybox-mount" [8ac821e2-3012-4a15-b7c8-a39d0bc41f3e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [8ac821e2-3012-4a15-b7c8-a39d0bc41f3e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0522 17:52:24.838586   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:52:24.844382   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:52:24.854621   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:52:24.874865   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:52:24.915155   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:52:24.995441   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:52:25.155834   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 17:52:25.476453   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [8ac821e2-3012-4a15-b7c8-a39d0bc41f3e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003498684s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-164981 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh stat /mount-9p/created-by-pod
E0522 17:52:26.118402   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdany-port4061324569/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.67s)
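
Note: the first findmnt probe above exits non-zero because the 9p mount is still coming up, and the test simply retries. A Go sketch of that launch-and-retry pattern (the host path /tmp/mnt stands in for the test's temp dir):

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Start the 9p mount in the background, like the test's "daemon:" step.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-164981", "/tmp/mnt:/mount-9p")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill() // tear the mount daemon down when done

	// Retry findmnt inside the guest instead of failing on the first probe.
	for i := 0; i < 10; i++ {
		err := exec.Command("out/minikube-linux-amd64", "-p", "functional-164981",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run()
		if err == nil {
			log.Println("9p mount is visible in the guest")
			return
		}
		time.Sleep(time.Second)
	}
	log.Fatal("mount never appeared")
}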

TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "292.614672ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "40.120167ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.33s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.3s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "261.971965ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "41.335534ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.30s)
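
Note: "profile list -o json" (and the --light variant) emit machine-readable profile data. A sketch of consuming it; the output is decoded generically because the exact schema is not reproduced in this log:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"profile", "list", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	// Decode into a generic map rather than assuming field names.
	var profiles map[string]any
	if err := json.Unmarshal(out, &profiles); err != nil {
		log.Fatal(err)
	}
	for key, val := range profiles {
		fmt.Printf("%s: %v\n", key, val)
	}
}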

TestFunctional/parallel/ServiceCmd/List (1.66s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 service list
functional_test.go:1455: (dbg) Done: out/minikube-linux-amd64 -p functional-164981 service list: (1.655976577s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.66s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-linux-amd64 -p functional-164981 service list -o json: (1.668647536s)
functional_test.go:1490: Took "1.668761461s" to run "out/minikube-linux-amd64 -p functional-164981 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.67s)

TestFunctional/parallel/MountCmd/specific-port (1.99s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdspecific-port3378154352/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (272.269448ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdspecific-port3378154352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 ssh "sudo umount -f /mount-9p": exit status 1 (249.957752ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr **
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-164981 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdspecific-port3378154352/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.99s)
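
Note: with the daemon already stopped, the forced unmount above reports "not mounted" (umount status 32, surfaced as a non-zero minikube ssh exit), and the test logs it rather than failing. A sketch that pins the 9p port with --port and tolerates that cleanup outcome the same way (/tmp/mnt is again a stand-in path):

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	// Pin the 9p server to a fixed port, as the test does with --port 46464.
	mount := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-164981", "/tmp/mnt:/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	mount.Process.Kill() // stop the daemon before cleaning up

	// A non-zero exit here just means the path was already unmounted.
	err := exec.Command("out/minikube-linux-amd64", "-p", "functional-164981",
		"ssh", "sudo umount -f /mount-9p").Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		log.Printf("umount exited with %d (already unmounted)", exitErr.ExitCode())
	}
}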

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 service --namespace=default --https --url hello-node
E0522 17:52:27.399452   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
functional_test.go:1518: found endpoint: https://192.168.49.2:31806
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup203636784/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup203636784/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup203636784/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T" /mount1: exit status 1 (373.129756ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh "findmnt -T" /mount3
E0522 17:52:29.960449   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-164981 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup203636784/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup203636784/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-164981 /tmp/TestFunctionalparallelMountCmdVerifyCleanup203636784/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
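
Note: the cleanup above relies on "minikube mount --kill=true", which kills every running mount process for the profile in one call (hence the three "assuming dead" messages that follow it). A minimal invocation:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// One --kill=true call tears down all mount daemons for the profile.
	out, err := exec.Command("out/minikube-linux-amd64", "mount",
		"-p", "functional-164981", "--kill=true").CombinedOutput()
	if err != nil {
		log.Fatalf("kill mounts: %v\n%s", err, out)
	}
	log.Println("all mount processes for the profile were killed")
}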

TestFunctional/parallel/ServiceCmd/URL (0.72s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.49.2:31806
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.72s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164981 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.1
registry.k8s.io/kube-proxy:v1.30.1
registry.k8s.io/kube-controller-manager:v1.30.1
registry.k8s.io/kube-apiserver:v1.30.1
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-164981
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-164981
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-164981 image ls --format short --alsologtostderr:
I0522 17:52:45.183084   66112 out.go:291] Setting OutFile to fd 1 ...
I0522 17:52:45.183174   66112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.183182   66112 out.go:304] Setting ErrFile to fd 2...
I0522 17:52:45.183186   66112 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.183403   66112 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 17:52:45.183918   66112 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.184008   66112 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.184370   66112 cli_runner.go:164] Run: docker container inspect functional-164981 --format={{.State.Status}}
I0522 17:52:45.205803   66112 ssh_runner.go:195] Run: systemctl --version
I0522 17:52:45.205858   66112 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-164981
I0522 17:52:45.226124   66112 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/functional-164981/id_rsa Username:docker}
I0522 17:52:45.315233   66112 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
W0522 17:52:45.339604   66112 root.go:91] failed to log command end to audit: failed to find a log row with id equals to 7bf9ca2e-436b-4ab8-bd92-28dafe0eb229
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)
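
Note: "image ls --format short" prints one image reference per line, which is the easiest format to assert against. A small sketch that filters the listing (the registry.k8s.io prefix filter is just an example):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-164981",
		"image", "ls", "--format", "short").Output()
	if err != nil {
		log.Fatal(err)
	}
	// One reference per line; strings.Fields also drops the trailing newline.
	for _, ref := range strings.Fields(string(out)) {
		if strings.HasPrefix(ref, "registry.k8s.io/") {
			fmt.Println(ref)
		}
	}
}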

TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164981 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-apiserver              | v1.30.1           | 91be940803172 | 117MB  |
| registry.k8s.io/kube-scheduler              | v1.30.1           | a52dc94f0a912 | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.1           | 747097150317f | 84.7MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | e784f4560448b | 188MB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| gcr.io/google-containers/addon-resizer      | functional-164981 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-164981 | 3d65941d78b52 | 30B    |
| registry.k8s.io/kube-controller-manager     | v1.30.1           | 25a1387cdab82 | 111MB  |
| docker.io/library/nginx                     | alpine            | 501d84f5d0648 | 48.3MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-164981 image ls --format table --alsologtostderr:
I0522 17:52:45.378822   66317 out.go:291] Setting OutFile to fd 1 ...
I0522 17:52:45.379071   66317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.379081   66317 out.go:304] Setting ErrFile to fd 2...
I0522 17:52:45.379085   66317 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.379233   66317 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 17:52:45.379764   66317 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.379856   66317 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.380202   66317 cli_runner.go:164] Run: docker container inspect functional-164981 --format={{.State.Status}}
I0522 17:52:45.399135   66317 ssh_runner.go:195] Run: systemctl --version
I0522 17:52:45.399191   66317 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-164981
I0522 17:52:45.416170   66317 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/functional-164981/id_rsa Username:docker}
I0522 17:52:45.499135   66317 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.19s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164981 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.1"],"size":"62000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.1"],"size":"111000000"},{"id":"e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-164981"],"size":"32900000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"3d65941d78b522ee12552134f6aa6c34a6a823d8746952c7e107724a9529c0c9","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-164981"],"size":"30"},{"id":"91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.1"],"size":"117000000"},{"id":"747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.1"],"size":"84700000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-164981 image ls --format json --alsologtostderr:
I0522 17:52:45.180199   66110 out.go:291] Setting OutFile to fd 1 ...
I0522 17:52:45.180573   66110 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.180605   66110 out.go:304] Setting ErrFile to fd 2...
I0522 17:52:45.180619   66110 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.180926   66110 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 17:52:45.181668   66110 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.181827   66110 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.182410   66110 cli_runner.go:164] Run: docker container inspect functional-164981 --format={{.State.Status}}
I0522 17:52:45.201281   66110 ssh_runner.go:195] Run: systemctl --version
I0522 17:52:45.201333   66110 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-164981
I0522 17:52:45.225134   66110 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/functional-164981/id_rsa Username:docker}
I0522 17:52:45.311331   66110 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
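
Note: the JSON listing above is an array of objects with id, repoDigests, repoTags, and size fields (size is a string of bytes, not a number). A sketch that unmarshals it into a matching struct:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// image mirrors the fields visible in the output above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, as a string
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-164981",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%.12s  %10s  %v\n", img.ID, img.Size, img.RepoTags)
	}
}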

TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-164981 image ls --format yaml --alsologtostderr:
- id: e784f4560448b14a66f55c26e1b4dad2c2877cc73d001b7cd0b18e24a700a070
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 747097150317f99937cabea484cff90097a2dbd79e7eb348b71dc0af879883cd
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.1
size: "84700000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 25a1387cdab82166df829c0b70761c10e2d2afce21a7bcf9ae4e9d71fe34ef2c
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.1
size: "111000000"
- id: a52dc94f0a91256bde86a1c3027a16336bb8fea9304f9311987066307996f035
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.1
size: "62000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 3d65941d78b522ee12552134f6aa6c34a6a823d8746952c7e107724a9529c0c9
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-164981
size: "30"
- id: 501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-164981
size: "32900000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 91be9408031725d89ff709fdf75a7666cedbf0d8831be4581310a879a096c71a
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.1
size: "117000000"
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-164981 image ls --format yaml --alsologtostderr:
I0522 17:52:45.183310   66111 out.go:291] Setting OutFile to fd 1 ...
I0522 17:52:45.183408   66111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.183431   66111 out.go:304] Setting ErrFile to fd 2...
I0522 17:52:45.183438   66111 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.183645   66111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 17:52:45.184155   66111 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.184240   66111 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.184581   66111 cli_runner.go:164] Run: docker container inspect functional-164981 --format={{.State.Status}}
I0522 17:52:45.203650   66111 ssh_runner.go:195] Run: systemctl --version
I0522 17:52:45.203708   66111 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-164981
I0522 17:52:45.223973   66111 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/functional-164981/id_rsa Username:docker}
I0522 17:52:45.311441   66111 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 ssh pgrep buildkitd
E0522 17:52:45.321545   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-164981 ssh pgrep buildkitd: exit status 1 (250.222697ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image build -t localhost/my-image:functional-164981 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-164981 image build -t localhost/my-image:functional-164981 testdata/build --alsologtostderr: (3.096196163s)
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-164981 image build -t localhost/my-image:functional-164981 testdata/build --alsologtostderr:
I0522 17:52:45.425903   66336 out.go:291] Setting OutFile to fd 1 ...
I0522 17:52:45.426168   66336 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.426178   66336 out.go:304] Setting ErrFile to fd 2...
I0522 17:52:45.426182   66336 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0522 17:52:45.426354   66336 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
I0522 17:52:45.426895   66336 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.427421   66336 config.go:182] Loaded profile config "functional-164981": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
I0522 17:52:45.427828   66336 cli_runner.go:164] Run: docker container inspect functional-164981 --format={{.State.Status}}
I0522 17:52:45.444681   66336 ssh_runner.go:195] Run: systemctl --version
I0522 17:52:45.444762   66336 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-164981
I0522 17:52:45.461304   66336 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32782 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/functional-164981/id_rsa Username:docker}
I0522 17:52:45.543173   66336 build_images.go:161] Building image from path: /tmp/build.2959236410.tar
I0522 17:52:45.543238   66336 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0522 17:52:45.551017   66336 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2959236410.tar
I0522 17:52:45.553877   66336 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2959236410.tar: stat -c "%s %y" /var/lib/minikube/build/build.2959236410.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2959236410.tar': No such file or directory
I0522 17:52:45.553922   66336 ssh_runner.go:362] scp /tmp/build.2959236410.tar --> /var/lib/minikube/build/build.2959236410.tar (3072 bytes)
I0522 17:52:45.574734   66336 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2959236410
I0522 17:52:45.582097   66336 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2959236410 -xf /var/lib/minikube/build/build.2959236410.tar
I0522 17:52:45.589608   66336 docker.go:360] Building image: /var/lib/minikube/build/build.2959236410
I0522 17:52:45.589658   66336 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-164981 /var/lib/minikube/build/build.2959236410
#0 building with "default" instance using docker driver
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.3s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.9s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 1.0s
#6 [2/3] RUN true
#6 DONE 0.3s
#7 [3/3] ADD content.txt /
#7 DONE 0.0s
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:b5cbdf75c06704a54dc94cf01e4636ea00f85e58b3dd7b4bc3d4d9b3128908a3 done
#8 naming to localhost/my-image:functional-164981 done
#8 DONE 0.0s
I0522 17:52:48.456167   66336 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-164981 /var/lib/minikube/build/build.2959236410: (2.866486199s)
I0522 17:52:48.456221   66336 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2959236410
I0522 17:52:48.464590   66336 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2959236410.tar
I0522 17:52:48.472065   66336 build_images.go:217] Built localhost/my-image:functional-164981 from /tmp/build.2959236410.tar
I0522 17:52:48.472094   66336 build_images.go:133] succeeded building to: functional-164981
I0522 17:52:48.472101   66336 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.53s)
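
Note: per the log above, "minikube image build" copies the build context into the node (the scp of build.*.tar) and runs "docker build" there, so the result lands directly in the cluster's runtime. The equivalent one-shot invocation, as a sketch:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Build testdata/build (a directory containing a Dockerfile) inside the node.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-164981",
		"image", "build", "-t", "localhost/my-image:functional-164981",
		"testdata/build").CombinedOutput()
	if err != nil {
		log.Fatalf("image build: %v\n%s", err, out)
	}
	log.Printf("build output:\n%s", out)
}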

TestFunctional/parallel/ImageCommands/Setup (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.106603484s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-164981
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.13s)

TestFunctional/parallel/DockerEnv/bash (0.97s)

=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-164981 docker-env) && out/minikube-linux-amd64 status -p functional-164981"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-164981 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.97s)
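
Note: "docker-env" prints shell exports (DOCKER_HOST and friends), so eval-ing it points the host's docker CLI at the dockerd inside the minikube node; the test's bash one-liner does exactly that. Reproduced as a sketch:

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Same round trip as the test: eval the exports, then list the images
	// that live inside the node's Docker daemon.
	script := `eval $(out/minikube-linux-amd64 -p functional-164981 docker-env) && docker images`
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	if err != nil {
		log.Fatalf("docker-env round trip: %v\n%s", err, out)
	}
	log.Printf("images inside the node:\n%s", out)
}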

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image load --daemon gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-164981 image load --daemon gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr: (3.001068157s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.18s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/Version/short (0.04s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 version --short
--- PASS: TestFunctional/parallel/Version/short (0.04s)

TestFunctional/parallel/Version/components (0.44s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.44s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image load --daemon gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr
E0522 17:52:35.081133   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
2024/05/22 17:52:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-164981 image load --daemon gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr: (2.381023128s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.56s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (2.029997268s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-164981
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image load --daemon gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-164981 image load --daemon gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr: (2.770019309s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.00s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image save gcr.io/google-containers/addon-resizer:functional-164981 /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.77s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image rm gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.39s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image load /home/jenkins/workspace/Docker_Linux_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.92s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-164981
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-164981 image save --daemon gcr.io/google-containers/addon-resizer:functional-164981 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-164981
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.97s)
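
Note: the Image{SaveToFile,Remove,LoadFromFile,SaveDaemon} tests exercise both directions of image transfer between the host and the cluster. A sketch chaining the same steps into one round trip (the tarball path is an arbitrary writable location, an assumption):

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) {
	if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
		log.Fatalf("%v: %v\n%s", args, err, out)
	}
}

func main() {
	const img = "gcr.io/google-containers/addon-resizer:functional-164981"
	const tar = "/tmp/addon-resizer-save.tar" // assumed path

	run("out/minikube-linux-amd64", "-p", "functional-164981", "image", "save", img, tar)
	run("out/minikube-linux-amd64", "-p", "functional-164981", "image", "load", tar)
	run("docker", "rmi", img) // drop the host's copy
	// save --daemon pushes the image from the cluster back into the host daemon.
	run("out/minikube-linux-amd64", "-p", "functional-164981", "image", "save", "--daemon", img)
	run("docker", "image", "inspect", img) // verify it is back
}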

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-164981
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-164981
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.01s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-164981
--- PASS: TestFunctional/delete_minikube_cached_images (0.01s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-828033 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)
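
Note: the NodeLabels check dumps every node's label map with a single jsonpath range expression. The same query, issued directly (context name and template copied from the log):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "ha-828033", "get", "nodes",
		"-o", `jsonpath=[{range .items[*]}{.metadata.labels},{end}]`).Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}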

TestMultiControlPlane/serial/CopyFile (3.71s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-828033 status --output json -v=7 --alsologtostderr: exit status 7 (290.93323ms)
-- stdout --
	[{"Name":"ha-828033","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-828033-m02","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":false}]
-- /stdout --
** stderr ** 
	I0522 18:08:36.282958   85063 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:08:36.283046   85063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:36.283054   85063 out.go:304] Setting ErrFile to fd 2...
	I0522 18:08:36.283058   85063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:08:36.283205   85063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:08:36.283384   85063 out.go:298] Setting JSON to true
	I0522 18:08:36.283409   85063 mustload.go:65] Loading cluster: ha-828033
	I0522 18:08:36.283508   85063 notify.go:220] Checking for updates...
	I0522 18:08:36.283722   85063 config.go:182] Loaded profile config "ha-828033": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:08:36.283736   85063 status.go:255] checking status of ha-828033 ...
	I0522 18:08:36.284087   85063 cli_runner.go:164] Run: docker container inspect ha-828033 --format={{.State.Status}}
	I0522 18:08:36.300763   85063 status.go:330] ha-828033 host status = "Running" (err=<nil>)
	I0522 18:08:36.300791   85063 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:08:36.301027   85063 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033
	I0522 18:08:36.316116   85063 host.go:66] Checking if "ha-828033" exists ...
	I0522 18:08:36.316337   85063 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:08:36.316378   85063 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-828033
	I0522 18:08:36.331142   85063 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32787 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/ha-828033/id_rsa Username:docker}
	I0522 18:08:36.411790   85063 ssh_runner.go:195] Run: systemctl --version
	I0522 18:08:36.415191   85063 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:08:36.424761   85063 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:08:36.472288   85063 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:63 SystemTime:2024-05-22 18:08:36.462769625 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:08:36.472853   85063 kubeconfig.go:125] found "ha-828033" server: "https://192.168.49.254:8443"
	I0522 18:08:36.472881   85063 api_server.go:166] Checking apiserver status ...
	I0522 18:08:36.472917   85063 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:08:36.483251   85063 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2281/cgroup
	I0522 18:08:36.491010   85063 api_server.go:182] apiserver freezer: "13:freezer:/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2"
	I0522 18:08:36.491068   85063 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/a436ef1be4f0bcc4c6cece66086fcafb8ef0d4e86d20ac5b71808e52dccc7a18/kubepods/burstable/pod54b3b26e16a7ecb9b17fbc5a589bfe7d/71559235c302876e581e0fdc41d44de5f3b15baaa5f146d7a6f6a6b264e137c2/freezer.state
	I0522 18:08:36.498336   85063 api_server.go:204] freezer state: "THAWED"
	I0522 18:08:36.498360   85063 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0522 18:08:36.501756   85063 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0522 18:08:36.501776   85063 status.go:422] ha-828033 apiserver status = Running (err=<nil>)
	I0522 18:08:36.501786   85063 status.go:257] ha-828033 status: &{Name:ha-828033 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:08:36.501800   85063 status.go:255] checking status of ha-828033-m02 ...
	I0522 18:08:36.502019   85063 cli_runner.go:164] Run: docker container inspect ha-828033-m02 --format={{.State.Status}}
	I0522 18:08:36.518780   85063 status.go:330] ha-828033-m02 host status = "Running" (err=<nil>)
	I0522 18:08:36.518797   85063 host.go:66] Checking if "ha-828033-m02" exists ...
	I0522 18:08:36.519004   85063 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-828033-m02
	E0522 18:08:36.534325   85063 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:08:36.534355   85063 status.go:257] ha-828033-m02 status: &{Name:ha-828033-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:08:36.534368   85063 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
** /stderr **
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 cp testdata/cp-test.txt ha-828033:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 cp ha-828033:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 cp ha-828033:/home/docker/cp-test.txt ha-828033-m02:/home/docker/cp-test_ha-828033_ha-828033-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033-m02 "sudo cat /home/docker/cp-test_ha-828033_ha-828033-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 cp testdata/cp-test.txt ha-828033-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1949047038/001/cp-test_ha-828033-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 cp ha-828033-m02:/home/docker/cp-test.txt ha-828033:/home/docker/cp-test_ha-828033-m02_ha-828033.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-828033 ssh -n ha-828033 "sudo cat /home/docker/cp-test_ha-828033-m02_ha-828033.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (3.71s)
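The m02 status error captured above comes from minikube's driver-IP probe: status renders a Go template over the container's .NetworkSettings.Networks and expects an "IPv4,IPv6" pair, so when the container has no entry under the expected network key the template prints nothing and the split comes back short. A minimal repro sketch, using the profile and node names from this run (the inspect command is the same one status logs above):

  # Renders empty when the container is not attached to the named network,
  # which is what surfaces as "container addresses should have 2 values, got 1 values".
  docker container inspect -f \
    '{{with (index .NetworkSettings.Networks "ha-828033-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}' \
    ha-828033-m02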

TestImageBuild/serial/Setup (20.95s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-245899 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-245899 --driver=docker  --container-runtime=docker: (20.946216013s)
--- PASS: TestImageBuild/serial/Setup (20.95s)

TestImageBuild/serial/NormalBuild (2.7s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-245899
image_test.go:78: (dbg) Done: out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-245899: (2.698732754s)
--- PASS: TestImageBuild/serial/NormalBuild (2.70s)

TestImageBuild/serial/BuildWithBuildArg (0.92s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-245899
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.92s)

TestImageBuild/serial/BuildWithDockerIgnore (0.8s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-245899
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.80s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.8s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-245899
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.80s)

TestJSONOutput/start/Command (77.36s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-043227 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-043227 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=docker: (1m17.357271982s)
--- PASS: TestJSONOutput/start/Command (77.36s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.45s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-043227 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.45s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.4s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-043227 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.40s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.77s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-043227 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-043227 --output=json --user=testUser: (10.765798203s)
--- PASS: TestJSONOutput/stop/Command (10.77s)

TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.18s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-218184 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-218184 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (58.531615ms)
-- stdout --
	{"specversion":"1.0","id":"e980dfce-ac14-4055-a205-d991cd764f2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-218184] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"59b8b8b4-1887-42ba-91b6-f716761b296a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"88b69f7a-8d90-4900-b299-b8df0b8001e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"16f01c0c-8b33-4661-8269-7df819686fcb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig"}}
	{"specversion":"1.0","id":"f3173e94-e8a5-44d9-91c2-90d39e1adb87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube"}}
	{"specversion":"1.0","id":"51eeeed1-c563-48f6-ae4f-7eb955ea5e45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c303e290-4f58-43e8-8f20-76f549ac5d77","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"da7df3fe-d992-45bc-9683-9c8f47a000b0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-218184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-218184
--- PASS: TestErrorJSONOutput (0.18s)
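Each line emitted under --output=json is a CloudEvents-style JSON object (specversion 1.0), which makes the stream easy to post-process. A small sketch of one way to pull the error message out of a run like the one above; the jq pipeline is an assumption for illustration and not part of the test itself:

  # Filter the event stream for minikube error events and print their message.
  out/minikube-linux-amd64 start -p json-output-error-218184 --memory=2200 --output=json --wait=true --driver=fail \
    | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
  # prints: The driver 'fail' is not supported on linux/amd64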

TestKicCustomNetwork/create_custom_network (23.14s)
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-535699 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-535699 --network=: (21.075735047s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-535699" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-535699
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-535699: (2.050863925s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.14s)

TestKicCustomNetwork/use_default_bridge_network (267.2s)
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-516201 --network=bridge
E0522 18:21:55.310373   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:22:24.838827   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 18:25:27.890530   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-516201 --network=bridge: (4m26.39764278s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-516201" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-516201
--- PASS: TestKicCustomNetwork/use_default_bridge_network (267.20s)

TestKicExistingNetwork (249.01s)
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-769777 --network=existing-network
E0522 18:26:55.310301   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 18:27:24.838912   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 18:29:58.356481   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-769777 --network=existing-network: (4m8.100977562s)
helpers_test.go:175: Cleaning up "existing-network-769777" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-769777
--- PASS: TestKicExistingNetwork (249.01s)

TestKicCustomSubnet (25.3s)
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-000956 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-000956 --subnet=192.168.60.0/24: (23.298275671s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-000956 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-000956" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-000956
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-000956: (1.985615966s)
--- PASS: TestKicCustomSubnet (25.30s)
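The assertion in this test is a simple round trip: create the cluster on a custom subnet, then read the subnet back out of Docker's IPAM config. Condensed from the commands above:

  out/minikube-linux-amd64 start -p custom-subnet-000956 --subnet=192.168.60.0/24
  docker network inspect custom-subnet-000956 --format "{{(index .IPAM.Config 0).Subnet}}"
  # expected output: 192.168.60.0/24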

TestKicStaticIP (25.96s)
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-885448 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-885448 --static-ip=192.168.200.200: (23.808716565s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-885448 ip
helpers_test.go:175: Cleaning up "static-ip-885448" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-885448
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-885448: (2.040205418s)
--- PASS: TestKicStaticIP (25.96s)

TestMainNoArgs (0.04s)
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.04s)

TestMinikubeProfile (51.9s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-515789 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-515789 --driver=docker  --container-runtime=docker: (23.719168736s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-518399 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-518399 --driver=docker  --container-runtime=docker: (23.203695482s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-515789
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-518399
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-518399" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-518399
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-518399: (1.94051667s)
helpers_test.go:175: Cleaning up "first-515789" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-515789
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-515789: (2.075058755s)
--- PASS: TestMinikubeProfile (51.90s)

TestMountStart/serial/StartWithMountFirst (10.01s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-736299 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-736299 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (9.013141334s)
E0522 18:31:55.310438   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
--- PASS: TestMountStart/serial/StartWithMountFirst (10.01s)

TestMountStart/serial/VerifyMountFirst (0.23s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-736299 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.23s)
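The mount-start tests pair a start that publishes a host mount with an ssh listing of the mount point inside the node; the later Verify* steps repeat only the listing. A condensed sketch of that pair, with flags taken from the run above:

  out/minikube-linux-amd64 start -p mount-start-1-736299 --memory=2048 --mount \
    --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 \
    --no-kubernetes --driver=docker --container-runtime=docker
  out/minikube-linux-amd64 -p mount-start-1-736299 ssh -- ls /minikube-host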

TestMountStart/serial/StartWithMountSecond (9.47s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-747898 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-747898 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.468838937s)
--- PASS: TestMountStart/serial/StartWithMountSecond (9.47s)

TestMountStart/serial/VerifyMountSecond (0.22s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.22s)

TestMountStart/serial/DeleteFirst (1.44s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-736299 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-736299 --alsologtostderr -v=5: (1.438461306s)
--- PASS: TestMountStart/serial/DeleteFirst (1.44s)

TestMountStart/serial/VerifyMountPostDelete (0.23s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.23s)

TestMountStart/serial/Stop (1.16s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-747898
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-747898: (1.16424743s)
--- PASS: TestMountStart/serial/Stop (1.16s)

TestMountStart/serial/RestartStopped (8.19s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-747898
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-747898: (7.188461262s)
--- PASS: TestMountStart/serial/RestartStopped (8.19s)

TestMountStart/serial/VerifyMountPostStop (0.22s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747898 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.22s)

TestMultiNode/serial/MultiNodeLabels (0.05s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-737786 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.26s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.26s)

TestMultiNode/serial/CopyFile (7.97s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-737786 status --output json --alsologtostderr: exit status 7 (319.388582ms)
-- stdout --
	[{"Name":"multinode-737786","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"multinode-737786-m02","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":true},{"Name":"multinode-737786-m03","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Nonexistent","Worker":true}]
-- /stdout --
** stderr ** 
	I0522 18:52:23.033843  182174 out.go:291] Setting OutFile to fd 1 ...
	I0522 18:52:23.034069  182174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:52:23.034078  182174 out.go:304] Setting ErrFile to fd 2...
	I0522 18:52:23.034082  182174 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0522 18:52:23.034254  182174 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18943-9771/.minikube/bin
	I0522 18:52:23.034462  182174 out.go:298] Setting JSON to true
	I0522 18:52:23.034489  182174 mustload.go:65] Loading cluster: multinode-737786
	I0522 18:52:23.034520  182174 notify.go:220] Checking for updates...
	I0522 18:52:23.034792  182174 config.go:182] Loaded profile config "multinode-737786": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.30.1
	I0522 18:52:23.034806  182174 status.go:255] checking status of multinode-737786 ...
	I0522 18:52:23.035165  182174 cli_runner.go:164] Run: docker container inspect multinode-737786 --format={{.State.Status}}
	I0522 18:52:23.051634  182174 status.go:330] multinode-737786 host status = "Running" (err=<nil>)
	I0522 18:52:23.051654  182174 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:52:23.051896  182174 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786
	I0522 18:52:23.068705  182174 host.go:66] Checking if "multinode-737786" exists ...
	I0522 18:52:23.069043  182174 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0522 18:52:23.069092  182174 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-737786
	I0522 18:52:23.084764  182174 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32897 SSHKeyPath:/home/jenkins/minikube-integration/18943-9771/.minikube/machines/multinode-737786/id_rsa Username:docker}
	I0522 18:52:23.163865  182174 ssh_runner.go:195] Run: systemctl --version
	I0522 18:52:23.167445  182174 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0522 18:52:23.177402  182174 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0522 18:52:23.221247  182174 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:49 OomKillDisable:true NGoroutines:73 SystemTime:2024-05-22 18:52:23.212681233 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1060-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647980544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-10 Labels:[] ExperimentalBuild:false ServerVersion:26.1.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e377cd56a71523140ca6ae87e30244719194a521 Expected:e377cd56a71523140ca6ae87e30244719194a521} RuncCommit:{ID:v1.1.12-0-g51d5e94 Expected:v1.1.12-0-g51d5e94} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.14.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.27.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0522 18:52:23.221759  182174 kubeconfig.go:125] found "multinode-737786" server: "https://192.168.67.2:8443"
	I0522 18:52:23.221793  182174 api_server.go:166] Checking apiserver status ...
	I0522 18:52:23.221822  182174 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0522 18:52:23.232172  182174 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2247/cgroup
	I0522 18:52:23.240441  182174 api_server.go:182] apiserver freezer: "13:freezer:/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8"
	I0522 18:52:23.240503  182174 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/b522c5b4d434efd7058927cbc9467353a83f79e4160000c4ba53bdfbd119af9b/kubepods/burstable/pode26311d8d9ac20af7e4c2c1c5c36c4c2/6991b35c68003e5f39d49909b30c06c08dca754bce1b085c70269887d2a142a8/freezer.state
	I0522 18:52:23.247876  182174 api_server.go:204] freezer state: "THAWED"
	I0522 18:52:23.247904  182174 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0522 18:52:23.251491  182174 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0522 18:52:23.251511  182174 status.go:422] multinode-737786 apiserver status = Running (err=<nil>)
	I0522 18:52:23.251522  182174 status.go:257] multinode-737786 status: &{Name:multinode-737786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0522 18:52:23.251538  182174 status.go:255] checking status of multinode-737786-m02 ...
	I0522 18:52:23.251780  182174 cli_runner.go:164] Run: docker container inspect multinode-737786-m02 --format={{.State.Status}}
	I0522 18:52:23.267967  182174 status.go:330] multinode-737786-m02 host status = "Running" (err=<nil>)
	I0522 18:52:23.267990  182174 host.go:66] Checking if "multinode-737786-m02" exists ...
	I0522 18:52:23.268227  182174 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m02")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m02
	E0522 18:52:23.283399  182174 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:52:23.283432  182174 status.go:257] multinode-737786-m02 status: &{Name:multinode-737786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:52:23.283450  182174 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:52:23.283458  182174 status.go:255] checking status of multinode-737786-m03 ...
	I0522 18:52:23.283699  182174 cli_runner.go:164] Run: docker container inspect multinode-737786-m03 --format={{.State.Status}}
	I0522 18:52:23.298987  182174 status.go:330] multinode-737786-m03 host status = "Running" (err=<nil>)
	I0522 18:52:23.299014  182174 host.go:66] Checking if "multinode-737786-m03" exists ...
	I0522 18:52:23.299246  182174 cli_runner.go:164] Run: docker container inspect -f "{{with (index .NetworkSettings.Networks "multinode-737786-m03")}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-737786-m03
	E0522 18:52:23.314740  182174 status.go:352] failed to get driver ip: getting IP: container addresses should have 2 values, got 1 values: []
	I0522 18:52:23.314765  182174 status.go:257] multinode-737786-m03 status: &{Name:multinode-737786-m03 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Nonexistent Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0522 18:52:23.314784  182174 status.go:260] status error: getting IP: container addresses should have 2 values, got 1 values: []
** /stderr **
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp testdata/cp-test.txt multinode-737786:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786:/home/docker/cp-test.txt multinode-737786-m02:/home/docker/cp-test_multinode-737786_multinode-737786-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m02 "sudo cat /home/docker/cp-test_multinode-737786_multinode-737786-m02.txt"
E0522 18:52:24.838761   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786:/home/docker/cp-test.txt multinode-737786-m03:/home/docker/cp-test_multinode-737786_multinode-737786-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m03 "sudo cat /home/docker/cp-test_multinode-737786_multinode-737786-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp testdata/cp-test.txt multinode-737786-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt multinode-737786:/home/docker/cp-test_multinode-737786-m02_multinode-737786.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786 "sudo cat /home/docker/cp-test_multinode-737786-m02_multinode-737786.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786-m02:/home/docker/cp-test.txt multinode-737786-m03:/home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m03 "sudo cat /home/docker/cp-test_multinode-737786-m02_multinode-737786-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp testdata/cp-test.txt multinode-737786-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3523291210/001/cp-test_multinode-737786-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt multinode-737786:/home/docker/cp-test_multinode-737786-m03_multinode-737786.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786 "sudo cat /home/docker/cp-test_multinode-737786-m03_multinode-737786.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 cp multinode-737786-m03:/home/docker/cp-test.txt multinode-737786-m02:/home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786-m02 "sudo cat /home/docker/cp-test_multinode-737786-m03_multinode-737786-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.97s)
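Each CopyFile step above is the same push/read-back cycle driven through minikube cp and ssh; the node-to-node variants do the same with node-qualified source and destination paths. The core cycle, using the names from this run:

  # Push a test file into the primary node, then read it back over ssh.
  out/minikube-linux-amd64 -p multinode-737786 cp testdata/cp-test.txt multinode-737786:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p multinode-737786 ssh -n multinode-737786 "sudo cat /home/docker/cp-test.txt"
  # the test then compares the cat output against the original testdata file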

TestMultiNode/serial/ValidateNameConflict (25.87s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-737786
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-737786-m03 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-737786-m03 --driver=docker  --container-runtime=docker: exit status 14 (55.120024ms)
-- stdout --
	* [multinode-737786-m03] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-737786-m03' is duplicated with machine name 'multinode-737786-m03' in profile 'multinode-737786'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-737786-m04 --driver=docker  --container-runtime=docker
E0522 19:01:55.309991   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-737786-m04 --driver=docker  --container-runtime=docker: (23.90639832s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-737786
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-737786: exit status 80 (250.352308ms)
-- stdout --
	* Adding node m04 to cluster multinode-737786 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-737786-m04 already exists in multinode-737786-m04 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-737786-m04
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-737786-m04: (1.61618289s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.87s)
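Both rejections above are plain exit-status contracts: MK_USAGE errors surface as exit status 14 and GUEST_NODE_ADD as exit status 80. For reference, a minimal Go sketch of that kind of assertion outside the test harness; the binary path and profile name are taken from the log, while the helper itself is illustrative and assumes the multinode-737786 profile still exists:

// exitcheck.go: a minimal sketch, not part of the minikube test suite.
package main

import (
	"fmt"
	"os/exec"
)

// runExpectingExit runs a command and reports whether it exited with want.
func runExpectingExit(want int, name string, args ...string) error {
	err := exec.Command(name, args...).Run()
	if exitErr, ok := err.(*exec.ExitError); ok {
		if got := exitErr.ExitCode(); got != want {
			return fmt.Errorf("got exit status %d, want %d", got, want)
		}
		return nil
	}
	if err == nil && want == 0 {
		return nil
	}
	return fmt.Errorf("expected exit status %d, got %v", want, err)
}

func main() {
	// MK_USAGE errors surface as exit status 14, as in the log above,
	// because the profile name collides with an existing machine name.
	if err := runExpectingExit(14, "out/minikube-linux-amd64",
		"start", "-p", "multinode-737786-m03", "--driver=docker"); err != nil {
		fmt.Println(err)
	}
}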

TestPreload (170.77s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-377292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4
E0522 19:02:24.838347   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 19:03:18.359402   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-377292 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.24.4: (2m3.33483043s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-377292 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-377292 image pull gcr.io/k8s-minikube/busybox: (2.051846975s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-377292
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-377292: (10.579417331s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-377292 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-377292 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (32.580823467s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-377292 image list
helpers_test.go:175: Cleaning up "test-preload-377292" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-377292
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-377292: (2.033830287s)
--- PASS: TestPreload (170.77s)
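The preload check above is a round trip: start an older Kubernetes with --preload=false, pull an extra image, stop, then restart with the current binary and confirm via image list that the pulled image survived. A minimal Go sketch of that final verification step, assuming the profile has already been through the stop/restart cycle (only the profile name and image come from the log; the rest is illustrative):

// preloadcheck.go: a minimal sketch of the verification step above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64",
		"-p", "test-preload-377292", "image", "list").Output()
	if err != nil {
		fmt.Println("image list failed:", err)
		return
	}
	// The image pulled before the restart must still be listed afterwards.
	if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
		fmt.Println("busybox image was lost across the restart")
		return
	}
	fmt.Println("restart kept the pulled image")
}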

TestScheduledStopUnix (96.51s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-402422 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-402422 --memory=2048 --driver=docker  --container-runtime=docker: (23.792760593s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402422 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-402422 -n scheduled-stop-402422
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402422 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402422 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-402422 -n scheduled-stop-402422
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-402422
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-402422 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-402422
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-402422: exit status 7 (57.187581ms)
-- stdout --
	scheduled-stop-402422
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-402422 -n scheduled-stop-402422
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-402422 -n scheduled-stop-402422: exit status 7 (55.77759ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-402422" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-402422
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-402422: (1.572327218s)
--- PASS: TestScheduledStopUnix (96.51s)
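The "status error: exit status 7 (may be ok)" lines encode the success condition: once the scheduled stop fires, minikube status exits with status 7 for a stopped host. A minimal Go sketch of the polling step, assuming a stop was scheduled with --schedule 15s (binary path and profile name from the log; the deadline is illustrative):

// stopwait.go: a minimal sketch of waiting for a scheduled stop to land.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		err := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.Host}}", "-p", "scheduled-stop-402422").Run()
		// Exit status 7 means the host is stopped, which is what we want.
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 7 {
			fmt.Println("host stopped on schedule")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}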

TestSkaffold (110.86s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1215748780 version
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-862715 --memory=2600 --driver=docker  --container-runtime=docker
E0522 19:06:55.310047   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-862715 --memory=2600 --driver=docker  --container-runtime=docker: (21.591190574s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1215748780 run --minikube-profile skaffold-862715 --kube-context skaffold-862715 --status-check=true --port-forward=false --interactive=false
E0522 19:07:24.838555   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1215748780 run --minikube-profile skaffold-862715 --kube-context skaffold-862715 --status-check=true --port-forward=false --interactive=false: (1m12.587912814s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-5458887b4-l9rt8" [890c28ca-cf55-495b-9afc-a17669a60a06] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003430275s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-59f46c94f6-q8kzc" [4e7e1f8b-3e43-4b28-bcdb-09216e531524] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.002835683s
helpers_test.go:175: Cleaning up "skaffold-862715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-862715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-862715: (2.641588841s)
--- PASS: TestSkaffold (110.86s)

TestInsufficientStorage (12.56s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-626764 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-626764 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (10.48046325s)
-- stdout --
	{"specversion":"1.0","id":"9ef6cb83-6f4f-4a83-aa91-33bf3d24fd10","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-626764] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"dd5f8bba-0723-4b09-a544-f69de2451b87","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18943"}}
	{"specversion":"1.0","id":"1bf7427e-7c03-48ec-b3b6-3fd63244e32a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cb15896a-d150-4909-a18e-f6aa51210ab4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig"}}
	{"specversion":"1.0","id":"b926d740-fe3e-4fa1-97b4-b9fd59721d3a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube"}}
	{"specversion":"1.0","id":"c16c82ac-3df1-4b7d-be2b-5546ca880656","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8bceb880-3810-4b57-8028-160b37be2e00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2f9341c4-710a-4a7f-89c5-b56cbed41483","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"6b63455e-ee2f-49cb-905d-9186682c8b22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"b6a2edaf-8c76-4565-b574-db254bf00e84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"805ddeaa-7691-4c6e-a959-6bd2cf6e6dc1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"d5c42a65-945b-4e7d-903a-35727c937b64","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-626764\" primary control-plane node in \"insufficient-storage-626764\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e75f6848-262a-4811-81df-3f1ac01a928f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.44-1715707529-18887 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"45ddf075-1094-4a76-8861-b2e440f4d0c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f970e2fe-211f-4c22-817a-9262c31f2f4d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-626764 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-626764 --output=json --layout=cluster: exit status 7 (239.607624ms)
-- stdout --
	{"Name":"insufficient-storage-626764","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-626764","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0522 19:08:37.655260  238943 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-626764" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-626764 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-626764 --output=json --layout=cluster: exit status 7 (237.876522ms)
-- stdout --
	{"Name":"insufficient-storage-626764","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-626764","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E0522 19:08:37.893873  239036 status.go:417] kubeconfig endpoint: get endpoint: "insufficient-storage-626764" does not appear in /home/jenkins/minikube-integration/18943-9771/kubeconfig
	E0522 19:08:37.903037  239036 status.go:560] unable to read event log: stat: stat /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/insufficient-storage-626764/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-626764" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-626764
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-626764: (1.599425434s)
--- PASS: TestInsufficientStorage (12.56s)
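With --output=json, minikube emits one CloudEvents-style JSON object per line (specversion, id, source, type, data), and the test drives the start to exit status 26 before inspecting the RSRC_DOCKER_STORAGE error event. A minimal Go sketch that decodes such a stream; the field names mirror the events logged above, while the program itself (reading from stdin) is illustrative:

// events.go: a minimal sketch of decoding minikube's JSON event stream.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"os"
)

type event struct {
	SpecVersion string `json:"specversion"`
	Type        string `json:"type"`
	Data        struct {
		Name     string `json:"name"`
		Message  string `json:"message"`
		ExitCode string `json:"exitcode"`
	} `json:"data"`
}

func main() {
	dec := json.NewDecoder(os.Stdin)
	for {
		var ev event
		if err := dec.Decode(&ev); err == io.EOF {
			return
		} else if err != nil {
			fmt.Fprintln(os.Stderr, "bad event:", err)
			return
		}
		// io.k8s.sigs.minikube.error events carry the failure details,
		// e.g. RSRC_DOCKER_STORAGE with exitcode 26 in the run above.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("%s (exit %s): %s\n", ev.Data.Name, ev.Data.ExitCode, ev.Data.Message)
		}
	}
}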

TestRunningBinaryUpgrade (141.71s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2990549232 start -p running-upgrade-041159 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2990549232 start -p running-upgrade-041159 --memory=2200 --vm-driver=docker  --container-runtime=docker: (1m48.130791571s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-041159 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-041159 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.669634541s)
helpers_test.go:175: Cleaning up "running-upgrade-041159" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-041159
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-041159: (2.346433982s)
--- PASS: TestRunningBinaryUpgrade (141.71s)

TestKubernetesUpgrade (338.9s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.52193095s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-310569
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-310569: (5.274025051s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-310569 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-310569 status --format={{.Host}}: exit status 7 (75.662772ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m33.449162172s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-310569 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=docker: exit status 106 (62.051411ms)
-- stdout --
	* [kubernetes-upgrade-310569] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-310569
	    minikube start -p kubernetes-upgrade-310569 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3105692 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.1, by running:
	    
	    minikube start -p kubernetes-upgrade-310569 --kubernetes-version=v1.30.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-310569 --memory=2200 --kubernetes-version=v1.30.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.175060582s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-310569" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-310569
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-310569: (2.286390576s)
--- PASS: TestKubernetesUpgrade (338.90s)
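After the upgrade, the test confirms the server version with kubectl version --output=json, and the later downgrade attempt fails fast with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED). A minimal Go sketch of the version check, assuming the usual serverVersion/gitVersion shape of kubectl's JSON output (context name and expected version from the log):

// verupgrade.go: a minimal sketch of the post-upgrade version check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type versionInfo struct {
	ServerVersion struct {
		GitVersion string `json:"gitVersion"`
	} `json:"serverVersion"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-310569",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v versionInfo
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// The upgraded control plane should report the target version.
	if v.ServerVersion.GitVersion != "v1.30.1" {
		fmt.Println("unexpected server version:", v.ServerVersion.GitVersion)
		return
	}
	fmt.Println("cluster is at", v.ServerVersion.GitVersion)
}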

TestMissingContainerUpgrade (182.34s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1235047151 start -p missing-upgrade-992132 --memory=2200 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1235047151 start -p missing-upgrade-992132 --memory=2200 --driver=docker  --container-runtime=docker: (1m51.232218632s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-992132
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-992132: (13.976950019s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-992132
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-992132 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-992132 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (52.62641499s)
helpers_test.go:175: Cleaning up "missing-upgrade-992132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-992132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-992132: (2.04939892s)
--- PASS: TestMissingContainerUpgrade (182.34s)

TestStoppedBinaryUpgrade/Setup (2.55s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.55s)

TestStoppedBinaryUpgrade/Upgrade (89.72s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3766153892 start -p stopped-upgrade-415350 --memory=2200 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3766153892 start -p stopped-upgrade-415350 --memory=2200 --vm-driver=docker  --container-runtime=docker: (50.267987082s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3766153892 -p stopped-upgrade-415350 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3766153892 -p stopped-upgrade-415350 stop: (12.965419991s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-415350 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-415350 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (26.481502629s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (89.72s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-415350
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-415350: (1.050587963s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.05s)

TestPause/serial/Start (45.76s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-288620 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-288620 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (45.75916739s)
--- PASS: TestPause/serial/Start (45.76s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503443 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-503443 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=docker: exit status 14 (59.095773ms)
-- stdout --
	* [NoKubernetes-503443] minikube v1.33.1 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18943
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18943-9771/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18943-9771/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.06s)

TestNoKubernetes/serial/StartWithK8s (25.69s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503443 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503443 --driver=docker  --container-runtime=docker: (25.422807854s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-503443 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (25.69s)

TestNoKubernetes/serial/StartWithStopK8s (16.55s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503443 --no-kubernetes --driver=docker  --container-runtime=docker
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503443 --no-kubernetes --driver=docker  --container-runtime=docker: (14.562131124s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-503443 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-503443 status -o json: exit status 2 (265.55115ms)
-- stdout --
	{"Name":"NoKubernetes-503443","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-503443
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-503443: (1.724642982s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.55s)

TestPause/serial/SecondStartNoReconfiguration (31.74s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-288620 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-288620 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (31.723964909s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (31.74s)

TestNoKubernetes/serial/Start (6.13s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503443 --no-kubernetes --driver=docker  --container-runtime=docker
E0522 19:13:13.287264   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:13.292600   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:13.302863   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:13.323151   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:13.363432   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:13.443769   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:13.604142   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:13.924465   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:14.565451   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503443 --no-kubernetes --driver=docker  --container-runtime=docker: (6.124921023s)
--- PASS: TestNoKubernetes/serial/Start (6.13s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-503443 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-503443 "sudo systemctl is-active --quiet service kubelet": exit status 1 (273.874992ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
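The check above leans on systemctl's exit-status convention: is-active exits 0 only for an active unit (the log shows ssh status 3, i.e. inactive), and minikube ssh surfaces the failure as its own exit status 1. A minimal Go sketch of the same probe (binary path and profile name from the log):

// kubeletoff.go: a minimal sketch of asserting kubelet is not running.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-503443",
		"sudo systemctl is-active --quiet service kubelet").Run()
	// A nil error would mean is-active exited 0, i.e. kubelet is active.
	if err == nil {
		fmt.Println("kubelet is unexpectedly running")
		return
	}
	fmt.Println("kubelet is not active, as expected:", err)
}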

TestNoKubernetes/serial/ProfileList (2.62s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
E0522 19:13:15.845883   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.92587893s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.62s)

TestNoKubernetes/serial/Stop (1.17s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-503443
E0522 19:13:18.406401   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-503443: (1.170616025s)
--- PASS: TestNoKubernetes/serial/Stop (1.17s)

TestNoKubernetes/serial/StartNoArgs (7.37s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-503443 --driver=docker  --container-runtime=docker
E0522 19:13:23.526977   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-503443 --driver=docker  --container-runtime=docker: (7.371778147s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.37s)

TestPause/serial/Pause (0.48s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-288620 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.48s)

TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-288620 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-288620 --output=json --layout=cluster: exit status 2 (289.838582ms)
-- stdout --
	{"Name":"pause-288620","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-288620","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)
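The --layout=cluster payload borrows HTTP-style status codes: 200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage as seen earlier in this run. A minimal Go sketch that decodes this JSON; the struct mirrors the fields logged above, and reading from stdin is illustrative:

// clusterstatus.go: a minimal sketch of decoding "--layout=cluster" output.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
	} `json:"Nodes"`
}

func main() {
	var st clusterStatus
	if err := json.NewDecoder(os.Stdin).Decode(&st); err != nil {
		fmt.Fprintln(os.Stderr, "decode failed:", err)
		return
	}
	// e.g. "pause-288620: 418 Paused" for the run above.
	fmt.Printf("cluster %s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
	for _, n := range st.Nodes {
		fmt.Printf("  node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
	}
}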

TestPause/serial/Unpause (0.47s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-288620 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)

TestPause/serial/PauseAgain (0.64s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-288620 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.64s)

TestPause/serial/DeletePaused (2.16s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-288620 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-288620 --alsologtostderr -v=5: (2.156713857s)
--- PASS: TestPause/serial/DeletePaused (2.16s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-503443 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-503443 "sudo systemctl is-active --quiet service kubelet": exit status 1 (252.01597ms)
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

TestPause/serial/VerifyDeletedResources (0.53s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-288620
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-288620: exit status 1 (17.201974ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-288620: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.53s)

TestNetworkPlugins/group/auto/Start (77.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m17.814098362s)
--- PASS: TestNetworkPlugins/group/auto/Start (77.81s)

TestNetworkPlugins/group/kindnet/Start (53.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0522 19:13:33.767114   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:13:54.248266   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (53.196356925s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (53.20s)

TestNetworkPlugins/group/calico/Start (67.64s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: (1m7.638834725s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.64s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sclsp" [eb0508fc-5b05-4136-a878-4ab1d4badf5d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003598335s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-kpzjm" [197a17bf-ddc8-4272-a6df-832541079575] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-kpzjm" [197a17bf-ddc8-4272-a6df-832541079575] Running
E0522 19:14:35.208878   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004425588s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.24s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.12s)

TestNetworkPlugins/group/kindnet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)
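Each CNI suite runs the same three probes against the netcat deployment: a DNS lookup of kubernetes.default, a localhost dial, and a hairpin dial back to the pod's own service, with each kubectl exec's exit status deciding pass or fail. A minimal Go sketch of the trio (context name and commands are taken from the log; the loop is illustrative):

// cniprobes.go: a minimal sketch of the three connectivity probes above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := map[string][]string{
		"dns":       {"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},
		"localhost": {"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"},
		"hairpin":   {"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},
	}
	for name, args := range probes {
		cmd := exec.Command("kubectl",
			append([]string{"--context", "kindnet-243275"}, args...)...)
		if err := cmd.Run(); err != nil {
			fmt.Printf("%s probe failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s probe ok\n", name)
	}
}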

TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.25s)

TestNetworkPlugins/group/auto/NetCatPod (9.22s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-qr52f" [656e2598-c40b-46fb-8eba-e8143353527d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-qr52f" [656e2598-c40b-46fb-8eba-e8143353527d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.00378394s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.22s)

TestNetworkPlugins/group/auto/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

TestNetworkPlugins/group/auto/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/Start (58.57s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (58.568560716s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (58.57s)

TestNetworkPlugins/group/auto/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)

TestNetworkPlugins/group/false/Start (43.54s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (43.542200003s)
--- PASS: TestNetworkPlugins/group/false/Start (43.54s)

TestNetworkPlugins/group/enable-default-cni/Start (76.67s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0522 19:15:27.895041   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m16.673935706s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (76.67s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gphf5" [25c35563-c299-49b5-b5cf-edb100f045b4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005748287s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.31s)

TestNetworkPlugins/group/calico/NetCatPod (10.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-65qpk" [221db056-6b7d-45e5-9f60-7d305af3fac5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-65qpk" [221db056-6b7d-45e5-9f60-7d305af3fac5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003349816s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.28s)

TestNetworkPlugins/group/calico/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.27s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cfqwc" [9369a33b-ff93-46fb-8331-e6dba540496b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0522 19:15:57.129373   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-cfqwc" [9369a33b-ff93-46fb-8331-e6dba540496b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003806816s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.20s)

TestNetworkPlugins/group/false/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.25s)

TestNetworkPlugins/group/false/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-6vcl4" [4e40fd0b-1fda-4085-87d3-b7dcf7640c9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-6vcl4" [4e40fd0b-1fda-4085-87d3-b7dcf7640c9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003494049s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.18s)

TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (52.91s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (52.907920859s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.91s)

TestNetworkPlugins/group/false/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

TestNetworkPlugins/group/false/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.13s)

TestNetworkPlugins/group/false/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.14s)

TestNetworkPlugins/group/bridge/Start (41.78s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (41.780764202s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.78s)

TestNetworkPlugins/group/kubenet/Start (79.45s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-243275 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m19.453266175s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (79.45s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-j8r7q" [f8b24c39-340a-4232-af30-a9e0140a5780] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-j8r7q" [f8b24c39-340a-4232-af30-a9e0140a5780] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003419784s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.15s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.13s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-78zcb" [a4daf240-af44-4e25-b14c-035f4e7b14a4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003801288s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.28s)

TestNetworkPlugins/group/flannel/NetCatPod (10.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-vvdpv" [b5b062f5-4c87-4b45-86d9-7c859f976735] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-vvdpv" [b5b062f5-4c87-4b45-86d9-7c859f976735] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003997149s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.20s)

TestStartStop/group/old-k8s-version/serial/FirstStart (146.43s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-694425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-694425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m26.431148257s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (146.43s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-gmmmp" [f2820aa5-0644-4d8f-acb6-fa235c276973] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-gmmmp" [f2820aa5-0644-4d8f-acb6-fa235c276973] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003532224s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.25s)

TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

TestNetworkPlugins/group/flannel/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestStartStop/group/no-preload/serial/FirstStart (54.12s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-742362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-742362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (54.124292872s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (54.12s)

TestStartStop/group/embed-certs/serial/FirstStart (44.08s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-531421 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-531421 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (44.07959484s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (44.08s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-243275 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-243275 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-p6z7c" [70a6864e-e223-417f-b36d-ac036887b8f5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-p6z7c" [70a6864e-e223-417f-b36d-ac036887b8f5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.003341677s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)

TestNetworkPlugins/group/kubenet/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-243275 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.17s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-243275 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.15s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.48s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-384495 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-384495 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (38.474924153s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (38.48s)

TestStartStop/group/embed-certs/serial/DeployApp (11.23s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-531421 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [8023af38-3c87-433a-a705-00af833dd2c4] Pending
helpers_test.go:344: "busybox" [8023af38-3c87-433a-a705-00af833dd2c4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [8023af38-3c87-433a-a705-00af833dd2c4] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 11.002812069s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-531421 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (11.23s)

TestStartStop/group/no-preload/serial/DeployApp (10.25s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-742362 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [d15848d0-bc51-4db3-9448-c4b77d85436b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [d15848d0-bc51-4db3-9448-c4b77d85436b] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003135469s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-742362 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.25s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-531421 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-531421 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.89s)

TestStartStop/group/embed-certs/serial/Stop (10.7s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-531421 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-531421 --alsologtostderr -v=3: (10.700800842s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.70s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-742362 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-742362 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.79s)

TestStartStop/group/no-preload/serial/Stop (10.74s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-742362 --alsologtostderr -v=3
E0522 19:18:40.970256   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-742362 --alsologtostderr -v=3: (10.742783638s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.74s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-531421 -n embed-certs-531421
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-531421 -n embed-certs-531421: exit status 7 (60.089868ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-531421 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/embed-certs/serial/SecondStart (262.79s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-531421 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-531421 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m22.481850418s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-531421 -n embed-certs-531421
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.79s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-742362 -n no-preload-742362
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-742362 -n no-preload-742362: exit status 7 (63.56556ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-742362 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.16s)

TestStartStop/group/no-preload/serial/SecondStart (263.1s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-742362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-742362 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m22.805492553s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-742362 -n no-preload-742362
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (263.10s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-384495 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [87a23e23-7b7e-40b1-b2f2-a1f98a6572cf] Pending
helpers_test.go:344: "busybox" [87a23e23-7b7e-40b1-b2f2-a1f98a6572cf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [87a23e23-7b7e-40b1-b2f2-a1f98a6572cf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003470472s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-384495 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.30s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-384495 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-384495 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.65s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-384495 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-384495 --alsologtostderr -v=3: (10.649756697s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.65s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495: exit status 7 (115.825561ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-384495 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.74s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-384495 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0522 19:19:22.631850   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:22.637136   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:22.647260   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:22.667561   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:22.708323   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:22.789138   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:22.949519   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:23.270692   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:23.911423   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:25.191546   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:27.752052   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:19:32.872215   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-384495 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (4m22.47017731s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (262.74s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-694425 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [52bba280-7f94-402e-a180-754ef6909444] Pending
helpers_test.go:344: "busybox" [52bba280-7f94-402e-a180-754ef6909444] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [52bba280-7f94-402e-a180-754ef6909444] Running
E0522 19:19:43.112608   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003122052s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-694425 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.46s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-694425 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-694425 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.87s)

TestStartStop/group/old-k8s-version/serial/Stop (10.74s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-694425 --alsologtostderr -v=3
E0522 19:19:47.099987   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:47.105246   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:47.115490   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:47.135851   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:47.176140   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:47.256458   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:47.416829   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:47.737397   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:48.378270   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:49.658823   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:52.219927   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-694425 --alsologtostderr -v=3: (10.738287963s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (10.74s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694425 -n old-k8s-version-694425
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694425 -n old-k8s-version-694425: exit status 7 (84.602221ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-694425 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/old-k8s-version/serial/SecondStart (123.33s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-694425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0
E0522 19:19:57.340264   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:19:58.360465   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
E0522 19:20:03.593480   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:20:07.581019   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:20:28.061921   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:20:29.508085   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:29.513336   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:29.523560   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:29.543793   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:29.584060   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:29.664344   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:29.824979   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:30.145435   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:30.786327   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:32.066669   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:34.627615   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:39.747901   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:44.553975   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:20:49.988598   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:20:55.364493   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:55.369776   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:55.380010   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:55.400250   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:55.440496   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:55.520801   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:55.681195   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:56.002090   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:56.642897   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:20:57.923551   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:21:00.484382   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:21:00.812745   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:00.817991   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:00.828233   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:00.848490   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:00.888751   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:00.968992   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:01.129378   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:01.449923   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:02.090242   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:03.370909   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:05.604699   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:21:05.931067   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:09.022645   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:21:10.469619   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:21:11.051584   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:15.845170   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:21:21.292269   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:36.326105   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:21:39.164604   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:39.169841   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:39.180082   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:39.200322   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:39.240569   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:39.320907   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:39.481320   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:39.801975   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:40.442212   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:41.722598   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:41.773295   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:21:44.283690   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:49.404184   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:21:51.430474   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
E0522 19:21:55.310588   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/functional-164981/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-694425 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.0: (2m3.043156277s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-694425 -n old-k8s-version-694425
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (123.33s)
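Note on the E0522 cert_rotation.go:168 spam threaded through this report: the single test process (pid 16668) keeps a client-go certificate-rotation watcher on the client.crt of every profile it has ever talked to, so after earlier profiles (calico-243275, kindnet-243275, and so on) are deleted, each rotation tick logs a "no such file or directory" error. These lines are noise from deleted profiles, not failures in the running tests. A minimal hand check, assuming this run's Jenkins paths:

  # Hypothetical spot-check, not part of the harness: any *-243275 profile
  # absent from this listing accounts for the matching errors above.
  ls /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/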

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dpk6h" [22e47805-b084-4e4c-8b23-8b51721bc80c] Running
E0522 19:21:58.982766   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:58.988001   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:58.998249   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:59.018475   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:59.058791   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:59.139111   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:59.299504   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:59.620270   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:21:59.644459   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:22:00.261039   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:22:01.542003   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:22:04.103034   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004046311s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
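The wait above is the harness polling for pods that match a label selector until they report healthy. A roughly equivalent manual check, assuming the same kubeconfig context (kubectl wait is standard kubectl, not part of the harness):

  kubectl --context old-k8s-version-694425 -n kubernetes-dashboard \
    get pods -l k8s-app=kubernetes-dashboard
  kubectl --context old-k8s-version-694425 -n kubernetes-dashboard \
    wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

The 540s timeout mirrors the 9m0s budget declared by the test; the dashboard pod here turned healthy in about 6 seconds.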

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-dpk6h" [22e47805-b084-4e4c-8b23-8b51721bc80c] Running
E0522 19:22:06.474342   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kindnet-243275/client.crt: no such file or directory
E0522 19:22:07.610124   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:07.615350   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:07.625574   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:07.645807   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:07.686059   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:07.766956   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:07.927362   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:08.247896   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:08.888915   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:09.223697   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003327002s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-694425 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.2s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-694425 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.20s)
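VerifyKubernetesImages lists the images inside the profile as JSON and flags anything outside the expected Kubernetes set; the busybox image reported above was loaded by an earlier deploy step, so finding it is expected. To inspect the same data by hand (pretty-printing only, so as not to assume the JSON schema):

  out/minikube-linux-amd64 -p old-k8s-version-694425 image list --format=json \
    | python3 -m json.tool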

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-694425 --alsologtostderr -v=1
E0522 19:22:10.169931   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694425 -n old-k8s-version-694425
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694425 -n old-k8s-version-694425: exit status 2 (273.458999ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-694425 -n old-k8s-version-694425
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-694425 -n old-k8s-version-694425: exit status 2 (274.023306ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-694425 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694425 -n old-k8s-version-694425
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-694425 -n old-k8s-version-694425
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.27s)
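The Pause subtest drives the full cycle shown above: pause the profile, read the APIServer and Kubelet fields through status Go templates (expecting Paused and Stopped), then unpause and read them again. minikube status intentionally exits non-zero (2 here) while any component is not Running, which is why the harness marks those exits "may be ok". Condensed into a sketch using only commands from this log:

  out/minikube-linux-amd64 pause -p old-k8s-version-694425 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694425  # "Paused", exit 2
  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-694425    # "Stopped", exit 2
  out/minikube-linux-amd64 unpause -p old-k8s-version-694425 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-694425  # expected back to Running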

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (38.17s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-994581 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0522 19:22:17.286474   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/custom-flannel-243275/client.crt: no such file or directory
E0522 19:22:17.851423   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:19.464135   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:22:20.124611   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:22:22.733651   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
E0522 19:22:24.838845   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/addons-340431/client.crt: no such file or directory
E0522 19:22:28.091744   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:30.943416   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/auto-243275/client.crt: no such file or directory
E0522 19:22:39.945037   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
E0522 19:22:48.572319   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/bridge-243275/client.crt: no such file or directory
E0522 19:22:51.696028   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:51.701280   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:51.711539   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:51.731788   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:51.772108   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:51.852491   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:52.013435   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:52.333894   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-994581 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (38.167522807s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (38.17s)
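This start combines three less common knobs: --network-plugin=cni defers pod networking to an external CNI (hence the repeated "cni mode requires additional setup" warnings below), --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 forwards a component.key=value setting to kubeadm, and --feature-gates is passed through to the Kubernetes components. Trimmed to just those flags, a minimal equivalent would be:

  out/minikube-linux-amd64 start -p newest-cni-994581 --memory=2200 \
    --network-plugin=cni \
    --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --feature-gates ServerSideApply=true \
    --driver=docker --container-runtime=docker --kubernetes-version=v1.30.1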

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-994581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0522 19:22:52.974492   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-994581 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.021492255s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.02s)
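The enable call exercises the --images/--registries overrides: MetricsServer is repointed at registry.k8s.io/echoserver:1.4 served from the deliberately unreachable registry fake.domain, so the test appears to verify that the override is wired into the addon rather than that the image actually pulls. The same override against a placeholder mirror (myregistry.example.com is illustrative, not from this run):

  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-994581 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
    --registries=MetricsServer=myregistry.example.com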

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (5.65s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-994581 --alsologtostderr -v=3
E0522 19:22:54.255179   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:22:56.816033   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-994581 --alsologtostderr -v=3: (5.651956382s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (5.65s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-994581 -n newest-cni-994581
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-994581 -n newest-cni-994581: exit status 7 (60.980975ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-994581 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.16s)
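As the exchange above shows, minikube status doubles as a scriptable probe: in this log exit status 7 accompanies a Stopped host (versus exit status 2 for paused or partially running components), and addons can still be enabled while the profile is down. A small sketch branching on that convention:

  # $? in the else branch still holds the status command's exit code.
  if out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-994581 >/dev/null; then
    echo "profile running"
  else
    echo "profile not running (exit $?; 7=stopped and 2=paused/degraded in this log)"
  fi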

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (14.65s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-994581 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1
E0522 19:23:01.085731   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/enable-default-cni-243275/client.crt: no such file or directory
E0522 19:23:01.936342   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-994581 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.30.1: (14.367151506s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-994581 -n newest-cni-994581
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (14.65s)
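The restart finishes in roughly 14.7s versus 38.2s for FirstStart because the existing docker container and cached images are reused. --wait narrows the readiness gate: apiserver,system_pods,default_sa waits only for those three components instead of the full --wait=true set, and it accepts any comma-separated subset, e.g.:

  # Wait only on the apiserver during a restart (subset of the list above).
  out/minikube-linux-amd64 start -p newest-cni-994581 --wait=apiserver --driver=docker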

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bbp6k" [03e80fc4-109a-47d4-8696-01cfd0fa3da3] Running
E0522 19:23:12.177305   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/kubenet-243275/client.crt: no such file or directory
E0522 19:23:13.287803   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/skaffold-862715/client.crt: no such file or directory
E0522 19:23:13.351004   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/calico-243275/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003871896s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-994581 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (2.18s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-994581 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-994581 -n newest-cni-994581
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-994581 -n newest-cni-994581: exit status 2 (267.818041ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-994581 -n newest-cni-994581
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-994581 -n newest-cni-994581: exit status 2 (267.034632ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-994581 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-994581 -n newest-cni-994581
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-994581 -n newest-cni-994581
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-mvswx" [c0cc4213-c946-45c5-8a71-6606e8521f69] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004433285s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-bbp6k" [03e80fc4-109a-47d4-8696-01cfd0fa3da3] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00385661s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-531421 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-531421 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.23s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-531421 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-531421 -n embed-certs-531421
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-531421 -n embed-certs-531421: exit status 2 (273.760828ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-531421 -n embed-certs-531421
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-531421 -n embed-certs-531421: exit status 2 (264.816943ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-531421 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-531421 -n embed-certs-531421
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-531421 -n embed-certs-531421
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.23s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-mvswx" [c0cc4213-c946-45c5-8a71-6606e8521f69] Running
E0522 19:23:20.905870   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/flannel-243275/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004344695s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-742362 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-742362 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.26s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-742362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-742362 -n no-preload-742362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-742362 -n no-preload-742362: exit status 2 (258.848843ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-742362 -n no-preload-742362
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-742362 -n no-preload-742362: exit status 2 (263.850396ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-742362 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-742362 -n no-preload-742362
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-742362 -n no-preload-742362
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-mw84l" [9e3734e1-5458-417c-be22-4eabf96e9fb5] Running
E0522 19:23:44.654321   16668 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18943-9771/.minikube/profiles/false-243275/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003030001s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-mw84l" [9e3734e1-5458-417c-be22-4eabf96e9fb5] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003656376s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-384495 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-384495 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-384495 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495: exit status 2 (266.104033ms)

                                                
                                                
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495: exit status 2 (263.695736ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-384495 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-384495 -n default-k8s-diff-port-384495
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.21s)

                                                
                                    

Test skip (20/342)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.30.1/kubectl (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestNetworkPlugins/group/cilium (3.46s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
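Everything below is the skip path's debugLogs dump: the cilium-243275 profile was never created, so every kubectl query fails with a missing-context error and every host-level query with a missing-profile error. Two quick ways to confirm that state by hand (both tools are already referenced in the output):

  kubectl config get-contexts             # cilium-243275 will not be listed
  out/minikube-linux-amd64 profile list   # nor will a cilium-243275 profile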
panic.go:626: 
----------------------- debugLogs start: cilium-243275 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-243275

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-243275

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-243275

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-243275

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-243275

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-243275

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-243275

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-243275

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-243275

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-243275

>>> host: /etc/nsswitch.conf:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /etc/hosts:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /etc/resolv.conf:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-243275

>>> host: crictl pods:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: crictl containers:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> k8s: describe netcat deployment:
error: context "cilium-243275" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-243275" does not exist

>>> k8s: netcat logs:
error: context "cilium-243275" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-243275" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-243275" does not exist

>>> k8s: coredns logs:
error: context "cilium-243275" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-243275" does not exist

>>> k8s: api server logs:
error: context "cilium-243275" does not exist

>>> host: /etc/cni:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: ip a s:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: ip r s:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: iptables-save:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: iptables table nat:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-243275

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-243275

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-243275" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-243275" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-243275

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-243275

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-243275" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-243275" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-243275" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-243275" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-243275" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: kubelet daemon config:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> k8s: kubelet logs:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-243275

>>> host: docker daemon status:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: docker daemon config:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: docker system info:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: cri-docker daemon status:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: cri-docker daemon config:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: cri-dockerd version:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: containerd daemon status:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: containerd daemon config:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: containerd config dump:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: crio daemon status:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: crio daemon config:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: /etc/crio:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

>>> host: crio config:
* Profile "cilium-243275" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-243275"

----------------------- debugLogs end: cilium-243275 [took: 3.262703932s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-243275" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-243275
--- SKIP: TestNetworkPlugins/group/cilium (3.46s)

TestStartStop/group/disable-driver-mounts (0.2s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-244917" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-244917
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
